Warning: Permanently added '18.212.114.89' (ED25519) to the list of known hosts.

You can reproduce this build on your computer by running:

  sudo dnf install copr-rpmbuild
  /usr/bin/copr-rpmbuild --verbose --drop-resultdir --task-url https://copr.fedorainfracloud.org/backend/get-build-task/9644096-fedora-41-x86_64 --chroot fedora-41-x86_64

Version: 1.6
PID: 8554
Logging PID: 8556
Task:
{'allow_user_ssh': False, 'appstream': False, 'background': False, 'build_id': 9644096, 'buildroot_pkgs': [], 'chroot': 'fedora-41-x86_64', 'enable_net': False, 'fedora_review': False, 'git_hash': 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd', 'git_repo': 'https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama', 'isolation': 'default', 'memory_reqs': 2048, 'package_name': 'ollama', 'package_version': '0.12.3-1', 'project_dirname': 'ollama', 'project_name': 'ollama', 'project_owner': 'fachep', 'repo_priority': None, 'repos': [{'baseurl': 'https://download.copr.fedorainfracloud.org/results/fachep/ollama/fedora-41-x86_64/', 'id': 'copr_base', 'name': 'Copr repository', 'priority': None}, {'baseurl': 'https://developer.download.nvidia.cn/compute/cuda/repos/fedora42/x86_64/', 'id': 'https_developer_download_nvidia_cn_compute_cuda_repos_fedora42_x86_64', 'name': 'Additional repo https_developer_download_nvidia_cn_compute_cuda_repos_fedora42_x86_64'}, {'baseurl': 'https://developer.download.nvidia.cn/compute/cuda/repos/fedora41/x86_64/', 'id': 'https_developer_download_nvidia_cn_compute_cuda_repos_fedora41_x86_64', 'name': 'Additional repo https_developer_download_nvidia_cn_compute_cuda_repos_fedora41_x86_64'}], 'sandbox': 'fachep/ollama--fachep', 'source_json': {}, 'source_type': None, 'ssh_public_keys': None, 'storage': 0, 'submitter': 'fachep', 'tags': [], 'task_id': '9644096-fedora-41-x86_64', 'timeout': 18000, 'uses_devel_repo': False, 'with_opts': [], 'without_opts': []}

Running: git clone https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama /var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama --depth 500 --no-single-branch --recursive
cmd: ['git', 'clone', 'https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama', '/var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama', '--depth', '500', '--no-single-branch', '--recursive']
cwd: .
rc: 0
stdout:
stderr:
Cloning into '/var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama'...

Running: git checkout bd90d2d0f4106e3a74de46dced869f2b79bfddfd --
cmd: ['git', 'checkout', 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd', '--']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama
rc: 0
stdout:
stderr:
Note: switching to 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd'.

You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command.
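The clone and detached-HEAD checkout above can also be reproduced by hand outside of copr-rpmbuild; a minimal sketch, assuming only git and network access to the Copr dist-git instance (the local destination directory name is arbitrary):

  git clone https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama --depth 500 --no-single-branch --recursive ollama
  cd ollama
  git checkout bd90d2d0f4106e3a74de46dced869f2b79bfddfd --

The --depth 500 shallow clone mirrors what copr-rpmbuild requests; a full clone works just as well.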
Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at bd90d2d automatic import of ollama

Running: dist-git-client sources
cmd: ['dist-git-client', 'sources']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama
rc: 0
stdout:
stderr:
INFO: Reading stdout from command: git rev-parse --abbrev-ref HEAD
INFO: Reading stdout from command: git rev-parse HEAD
INFO: Reading sources specification file: sources
INFO: Downloading ollama-0.12.3.tar.gz
INFO: Reading stdout from command: curl --help all
INFO: Calling: curl -H Pragma: -o ollama-0.12.3.tar.gz --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/ollama-0.12.3.tar.gz/md5/f096acee5e82596e9afd4d07ed477de2/ollama-0.12.3.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.5M  100 10.5M    0     0   356M      0 --:--:-- --:--:-- --:--:--  350M
INFO: Reading stdout from command: md5sum ollama-0.12.3.tar.gz
INFO: Downloading vendor.tar.bz2
INFO: Calling: curl -H Pragma: -o vendor.tar.bz2 --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/vendor.tar.bz2/md5/c608d605610ed47b385cf54a6f6b2a2c/vendor.tar.bz2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6402k  100 6402k    0     0   261M      0 --:--:-- --:--:-- --:--:--  271M
INFO: Reading stdout from command: md5sum vendor.tar.bz2

tail: /var/lib/copr-rpmbuild/main.log: file truncated

Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1759552642.334703 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: mock.py version 6.3 starting (python version = 3.13.7, NVR = mock-6.3-1.fc42), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1759552642.334703 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
Start(bootstrap): init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish(bootstrap): init plugins
Start: init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish: init plugins
INFO: Signal handler active
Start: run
INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama/ollama.spec) Config(fedora-41-x86_64)
Start: clean chroot
Finish: clean chroot
Mock Version: 6.3
INFO: Mock Version: 6.3
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-41-x86_64-bootstrap-1759552642.334703/root.
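The lookaside-cache URLs above embed the expected MD5 digest of each source archive (the /md5/<hash>/ path component), and dist-git-client re-runs md5sum after each download. A minimal sketch of checking the same thing by hand, assuming the two tarballs sit in the current directory:

  echo 'f096acee5e82596e9afd4d07ed477de2  ollama-0.12.3.tar.gz' | md5sum -c -
  echo 'c608d605610ed47b385cf54a6f6b2a2c  vendor.tar.bz2' | md5sum -c -

md5sum -c only reports OK when the computed digest matches the one copied out of the URL, which is how a stale or corrupted download would be caught before the build starts.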
INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start(bootstrap): cleaning package manager metadata Finish(bootstrap): cleaning package manager metadata INFO: Guessed host environment type: unknown INFO: Using container image: registry.fedoraproject.org/fedora:41 INFO: Pulling image: registry.fedoraproject.org/fedora:41 INFO: Tagging container image as mock-bootstrap-93bc8eca-953f-4fed-879a-2968f5b436e5 INFO: Checking that 53b19ada73de27cf9c1863a00b2d8e43a8e19843c5d2154d152e5a4d242329b4 image matches host's architecture INFO: Copy content of container 53b19ada73de27cf9c1863a00b2d8e43a8e19843c5d2154d152e5a4d242329b4 to /var/lib/mock/fedora-41-x86_64-bootstrap-1759552642.334703/root INFO: mounting 53b19ada73de27cf9c1863a00b2d8e43a8e19843c5d2154d152e5a4d242329b4 with podman image mount INFO: image 53b19ada73de27cf9c1863a00b2d8e43a8e19843c5d2154d152e5a4d242329b4 as /var/lib/containers/storage/overlay/a5caf6e4e15dc7aa336b4af608aa4de0164eb7ee06d8b09043949ac9292dcf34/merged INFO: umounting image 53b19ada73de27cf9c1863a00b2d8e43a8e19843c5d2154d152e5a4d242329b4 (/var/lib/containers/storage/overlay/a5caf6e4e15dc7aa336b4af608aa4de0164eb7ee06d8b09043949ac9292dcf34/merged) with podman image umount INFO: Removing image mock-bootstrap-93bc8eca-953f-4fed-879a-2968f5b436e5 INFO: Package manager dnf5 detected and used (fallback) INFO: Not updating bootstrap chroot, bootstrap_image_ready=True Start(bootstrap): creating root cache Finish(bootstrap): creating root cache Finish(bootstrap): chroot init Start: chroot init INFO: mounting tmpfs at /var/lib/mock/fedora-41-x86_64-1759552642.334703/root. INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start: cleaning package manager metadata Finish: cleaning package manager metadata INFO: enabled HW Info plugin INFO: Package manager dnf5 detected and used (direct choice) INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-4.20.1-1.fc41.x86_64 rpm-sequoia-1.7.0-5.fc41.x86_64 dnf5-5.2.16.0-1.fc41.x86_64 dnf5-plugins-5.2.16.0-1.fc41.x86_64 Start: installing minimal buildroot with dnf5 Updating and loading repositories: Copr repository 100% | 3.2 KiB/s | 3.1 KiB | 00m01s Additional repo https_developer_downlo 100% | 38.6 KiB/s | 47.8 KiB | 00m01s Additional repo https_developer_downlo 100% | 82.1 KiB/s | 109.0 KiB | 00m01s updates 100% | 11.8 MiB/s | 19.9 MiB | 00m02s fedora 100% | 17.3 MiB/s | 35.6 MiB | 00m02s Repositories loaded. 
Package Arch Version Repository Size Installing group/module packages: bash x86_64 5.2.32-1.fc41 fedora 8.2 MiB bzip2 x86_64 1.0.8-19.fc41 fedora 95.7 KiB coreutils x86_64 9.5-12.fc41 updates 5.5 MiB cpio x86_64 2.15-2.fc41 fedora 1.1 MiB diffutils x86_64 3.10-8.fc41 fedora 1.6 MiB fedora-release-common noarch 41-33 updates 19.7 KiB findutils x86_64 1:4.10.0-4.fc41 fedora 1.8 MiB gawk x86_64 5.3.0-4.fc41 fedora 1.7 MiB glibc-minimal-langpack x86_64 2.40-28.fc41 updates 0.0 B grep x86_64 3.11-9.fc41 fedora 1.0 MiB gzip x86_64 1.13-2.fc41 fedora 389.0 KiB info x86_64 7.1.1-1.fc41 updates 361.7 KiB patch x86_64 2.7.6-25.fc41 fedora 266.7 KiB redhat-rpm-config noarch 294-1.fc41 updates 183.6 KiB rpm-build x86_64 4.20.1-1.fc41 updates 193.8 KiB sed x86_64 4.9-3.fc41 fedora 861.5 KiB shadow-utils x86_64 2:4.15.1-12.fc41 fedora 4.1 MiB tar x86_64 2:1.35-4.fc41 fedora 2.9 MiB unzip x86_64 6.0-64.fc41 fedora 386.8 KiB util-linux x86_64 2.40.4-1.fc41 updates 3.6 MiB which x86_64 2.21-42.fc41 fedora 80.2 KiB xz x86_64 1:5.8.1-2.fc41 updates 1.3 MiB Installing dependencies: add-determinism x86_64 0.3.6-3.fc41 updates 2.4 MiB alternatives x86_64 1.31-1.fc41 updates 64.8 KiB ansible-srpm-macros noarch 1-16.fc41 fedora 35.7 KiB audit-libs x86_64 4.1.1-1.fc41 updates 387.1 KiB authselect x86_64 1.5.0-8.fc41 fedora 157.6 KiB authselect-libs x86_64 1.5.0-8.fc41 fedora 822.2 KiB basesystem noarch 11-21.fc41 fedora 0.0 B binutils x86_64 2.43.1-8.fc41 updates 27.5 MiB build-reproducibility-srpm-macros noarch 0.3.6-3.fc41 updates 735.0 B bzip2-libs x86_64 1.0.8-19.fc41 fedora 80.7 KiB ca-certificates noarch 2024.2.69_v8.0.401-1.0.fc41 fedora 2.4 MiB coreutils-common x86_64 9.5-12.fc41 updates 11.2 MiB cracklib x86_64 2.9.11-6.fc41 fedora 238.9 KiB crypto-policies noarch 20250707-1.git836bbee.fc41 updates 143.8 KiB curl x86_64 8.9.1-4.fc41 updates 796.2 KiB cyrus-sasl-lib x86_64 2.1.28-27.fc41 fedora 2.3 MiB debugedit x86_64 5.1-6.fc41 updates 200.9 KiB dwz x86_64 0.15-8.fc41 fedora 298.9 KiB ed x86_64 1.20.2-2.fc41 fedora 146.9 KiB efi-srpm-macros noarch 5-13.fc41 updates 40.2 KiB elfutils x86_64 0.193-2.fc41 updates 3.0 MiB elfutils-debuginfod-client x86_64 0.193-2.fc41 updates 84.1 KiB elfutils-default-yama-scope noarch 0.193-2.fc41 updates 1.8 KiB elfutils-libelf x86_64 0.193-2.fc41 updates 1.2 MiB elfutils-libs x86_64 0.193-2.fc41 updates 686.6 KiB fedora-gpg-keys noarch 41-3 updates 128.2 KiB fedora-release noarch 41-33 updates 0.0 B fedora-release-identity-basic noarch 41-33 updates 654.0 B fedora-repos noarch 41-3 updates 4.9 KiB file x86_64 5.45-7.fc41 fedora 103.5 KiB file-libs x86_64 5.45-7.fc41 fedora 9.9 MiB filesystem x86_64 3.18-23.fc41 fedora 106.0 B fonts-srpm-macros noarch 1:2.0.5-17.fc41 fedora 55.8 KiB forge-srpm-macros noarch 0.4.0-1.fc41 updates 38.9 KiB fpc-srpm-macros noarch 1.3-13.fc41 fedora 144.0 B gdb-minimal x86_64 16.3-1.fc41 updates 13.3 MiB gdbm x86_64 1:1.23-7.fc41 fedora 460.9 KiB gdbm-libs x86_64 1:1.23-7.fc41 fedora 121.9 KiB ghc-srpm-macros noarch 1.9.1-2.fc41 fedora 747.0 B glibc x86_64 2.40-28.fc41 updates 6.7 MiB glibc-common x86_64 2.40-28.fc41 updates 1.0 MiB glibc-gconv-extra x86_64 2.40-28.fc41 updates 8.0 MiB gmp x86_64 1:6.3.0-2.fc41 fedora 811.4 KiB gnat-srpm-macros noarch 6-6.fc41 fedora 1.0 KiB go-srpm-macros noarch 3.8.0-1.fc41 updates 61.9 KiB jansson x86_64 2.13.1-10.fc41 fedora 88.3 KiB json-c x86_64 0.17-4.fc41 fedora 82.4 KiB kernel-srpm-macros noarch 1.0-24.fc41 fedora 1.9 KiB keyutils-libs x86_64 1.6.3-4.fc41 fedora 54.4 KiB krb5-libs x86_64 1.21.3-5.fc41 
updates 2.3 MiB libacl x86_64 2.3.2-2.fc41 fedora 40.0 KiB libarchive x86_64 3.7.4-4.fc41 updates 926.6 KiB libattr x86_64 2.5.2-4.fc41 fedora 28.5 KiB libblkid x86_64 2.40.4-1.fc41 updates 257.2 KiB libbrotli x86_64 1.1.0-5.fc41 fedora 837.6 KiB libcap x86_64 2.70-4.fc41 fedora 220.2 KiB libcap-ng x86_64 0.8.5-3.fc41 fedora 69.2 KiB libcom_err x86_64 1.47.1-6.fc41 fedora 67.2 KiB libcurl x86_64 8.9.1-4.fc41 updates 822.1 KiB libeconf x86_64 0.6.2-3.fc41 fedora 58.0 KiB libevent x86_64 2.1.12-14.fc41 fedora 895.7 KiB libfdisk x86_64 2.40.4-1.fc41 updates 356.4 KiB libffi x86_64 3.4.6-3.fc41 fedora 86.4 KiB libgcc x86_64 14.3.1-3.fc41 updates 274.6 KiB libgomp x86_64 14.3.1-3.fc41 updates 523.6 KiB libidn2 x86_64 2.3.8-1.fc41 updates 556.6 KiB libmount x86_64 2.40.4-1.fc41 updates 348.8 KiB libnghttp2 x86_64 1.62.1-3.fc41 updates 174.1 KiB libnsl2 x86_64 2.0.1-2.fc41 fedora 57.9 KiB libpkgconf x86_64 2.3.0-1.fc41 fedora 78.2 KiB libpsl x86_64 0.21.5-4.fc41 fedora 80.5 KiB libpwquality x86_64 1.4.5-11.fc41 fedora 417.8 KiB libselinux x86_64 3.7-5.fc41 fedora 181.0 KiB libsemanage x86_64 3.7-2.fc41 fedora 293.5 KiB libsepol x86_64 3.7-2.fc41 fedora 817.8 KiB libsmartcols x86_64 2.40.4-1.fc41 updates 176.2 KiB libssh x86_64 0.11.3-1.fc41 updates 571.2 KiB libssh-config noarch 0.11.3-1.fc41 updates 277.0 B libstdc++ x86_64 14.3.1-3.fc41 updates 2.8 MiB libtasn1 x86_64 4.20.0-1.fc41 updates 180.4 KiB libtirpc x86_64 1.3.6-1.rc3.fc41 updates 197.6 KiB libtool-ltdl x86_64 2.4.7-12.fc41 fedora 66.2 KiB libunistring x86_64 1.1-8.fc41 fedora 1.7 MiB libutempter x86_64 1.2.1-15.fc41 fedora 57.7 KiB libuuid x86_64 2.40.4-1.fc41 updates 39.9 KiB libverto x86_64 0.3.2-9.fc41 fedora 29.5 KiB libxcrypt x86_64 4.4.38-7.fc41 updates 288.5 KiB libxml2 x86_64 2.12.10-1.fc41 updates 1.7 MiB libzstd x86_64 1.5.7-1.fc41 updates 804.0 KiB lua-libs x86_64 5.4.8-1.fc41 updates 285.0 KiB lua-srpm-macros noarch 1-14.fc41 fedora 1.3 KiB lz4-libs x86_64 1.10.0-1.fc41 fedora 145.5 KiB mpfr x86_64 4.2.1-5.fc41 fedora 832.1 KiB ncurses-base noarch 6.5-2.20240629.fc41 fedora 326.3 KiB ncurses-libs x86_64 6.5-2.20240629.fc41 fedora 975.2 KiB ocaml-srpm-macros noarch 10-3.fc41 fedora 1.9 KiB openblas-srpm-macros noarch 2-18.fc41 fedora 112.0 B openldap x86_64 2.6.10-1.fc41 updates 646.0 KiB openssl-libs x86_64 1:3.2.4-2.fc41 updates 7.8 MiB p11-kit x86_64 0.25.5-4.fc41 updates 2.2 MiB p11-kit-trust x86_64 0.25.5-4.fc41 updates 395.4 KiB package-notes-srpm-macros noarch 0.5-12.fc41 fedora 1.6 KiB pam x86_64 1.6.1-8.fc41 updates 1.8 MiB pam-libs x86_64 1.6.1-8.fc41 updates 139.0 KiB pcre2 x86_64 10.44-1.fc41.1 fedora 653.5 KiB pcre2-syntax noarch 10.44-1.fc41.1 fedora 251.6 KiB perl-srpm-macros noarch 1-56.fc41 fedora 861.0 B pkgconf x86_64 2.3.0-1.fc41 fedora 88.6 KiB pkgconf-m4 noarch 2.3.0-1.fc41 fedora 14.4 KiB pkgconf-pkg-config x86_64 2.3.0-1.fc41 fedora 989.0 B popt x86_64 1.19-7.fc41 fedora 136.9 KiB publicsuffix-list-dafsa noarch 20250616-1.fc41 updates 69.1 KiB pyproject-srpm-macros noarch 1.18.4-1.fc41 updates 1.9 KiB python-srpm-macros noarch 3.13-5.fc41 updates 51.0 KiB qt5-srpm-macros noarch 5.15.17-1.fc41 updates 500.0 B qt6-srpm-macros noarch 6.8.3-1.fc41 updates 456.0 B readline x86_64 8.2-10.fc41 fedora 493.2 KiB rpm x86_64 4.20.1-1.fc41 updates 3.1 MiB rpm-build-libs x86_64 4.20.1-1.fc41 updates 210.7 KiB rpm-libs x86_64 4.20.1-1.fc41 updates 730.0 KiB rpm-sequoia x86_64 1.7.0-5.fc41 updates 2.4 MiB rust-srpm-macros noarch 26.4-1.fc41 updates 4.8 KiB setup noarch 2.15.0-8.fc41 updates 720.7 KiB sqlite-libs 
x86_64 3.46.1-5.fc41 updates 1.5 MiB systemd-libs x86_64 256.17-1.fc41 updates 2.0 MiB util-linux-core x86_64 2.40.4-1.fc41 updates 1.5 MiB xxhash-libs x86_64 0.8.3-1.fc41 updates 88.5 KiB xz-libs x86_64 1:5.8.1-2.fc41 updates 217.9 KiB zig-srpm-macros noarch 1-3.fc41 fedora 1.1 KiB zip x86_64 3.0-41.fc41 fedora 703.2 KiB zlib-ng-compat x86_64 2.2.3-2.fc41 updates 141.9 KiB zstd x86_64 1.5.7-1.fc41 updates 1.7 MiB Installing groups: Buildsystem building group Transaction Summary: Installing: 154 packages Total size of inbound packages is 53 MiB. Need to download 53 MiB. After this operation, 182 MiB extra will be used (install 182 MiB, remove 0 B). [ 1/154] bzip2-0:1.0.8-19.fc41.x86_64 100% | 3.9 MiB/s | 52.5 KiB | 00m00s [ 2/154] cpio-0:2.15-2.fc41.x86_64 100% | 15.0 MiB/s | 291.8 KiB | 00m00s [ 3/154] diffutils-0:3.10-8.fc41.x86_6 100% | 39.6 MiB/s | 405.4 KiB | 00m00s [ 4/154] bash-0:5.2.32-1.fc41.x86_64 100% | 66.9 MiB/s | 1.8 MiB | 00m00s [ 5/154] findutils-1:4.10.0-4.fc41.x86 100% | 67.0 MiB/s | 548.5 KiB | 00m00s [ 6/154] grep-0:3.11-9.fc41.x86_64 100% | 73.2 MiB/s | 299.7 KiB | 00m00s [ 7/154] gzip-0:1.13-2.fc41.x86_64 100% | 55.4 MiB/s | 170.2 KiB | 00m00s [ 8/154] patch-0:2.7.6-25.fc41.x86_64 100% | 42.6 MiB/s | 131.0 KiB | 00m00s [ 9/154] sed-0:4.9-3.fc41.x86_64 100% | 103.4 MiB/s | 317.7 KiB | 00m00s [ 10/154] tar-2:1.35-4.fc41.x86_64 100% | 140.1 MiB/s | 860.7 KiB | 00m00s [ 11/154] shadow-utils-2:4.15.1-12.fc41 100% | 146.6 MiB/s | 1.3 MiB | 00m00s [ 12/154] unzip-0:6.0-64.fc41.x86_64 100% | 22.6 MiB/s | 184.9 KiB | 00m00s [ 13/154] which-0:2.21-42.fc41.x86_64 100% | 13.5 MiB/s | 41.6 KiB | 00m00s [ 14/154] fedora-release-common-0:41-33 100% | 5.7 MiB/s | 23.2 KiB | 00m00s [ 15/154] coreutils-0:9.5-12.fc41.x86_6 100% | 120.8 MiB/s | 1.1 MiB | 00m00s [ 16/154] gawk-0:5.3.0-4.fc41.x86_64 100% | 107.1 MiB/s | 1.1 MiB | 00m00s [ 17/154] glibc-minimal-langpack-0:2.40 100% | 9.9 MiB/s | 70.9 KiB | 00m00s [ 18/154] info-0:7.1.1-1.fc41.x86_64 100% | 59.2 MiB/s | 182.0 KiB | 00m00s [ 19/154] redhat-rpm-config-0:294-1.fc4 100% | 25.7 MiB/s | 79.0 KiB | 00m00s [ 20/154] rpm-build-0:4.20.1-1.fc41.x86 100% | 26.5 MiB/s | 81.5 KiB | 00m00s [ 21/154] util-linux-0:2.40.4-1.fc41.x8 100% | 155.0 MiB/s | 1.1 MiB | 00m00s [ 22/154] xz-1:5.8.1-2.fc41.x86_64 100% | 93.2 MiB/s | 572.9 KiB | 00m00s [ 23/154] filesystem-0:3.18-23.fc41.x86 100% | 120.8 MiB/s | 1.1 MiB | 00m00s [ 24/154] bzip2-libs-0:1.0.8-19.fc41.x8 100% | 10.0 MiB/s | 41.1 KiB | 00m00s [ 25/154] ncurses-libs-0:6.5-2.20240629 100% | 54.4 MiB/s | 334.0 KiB | 00m00s [ 26/154] libselinux-0:3.7-5.fc41.x86_6 100% | 21.4 MiB/s | 87.8 KiB | 00m00s [ 27/154] pcre2-0:10.44-1.fc41.1.x86_64 100% | 59.4 MiB/s | 243.1 KiB | 00m00s [ 28/154] ed-0:1.20.2-2.fc41.x86_64 100% | 20.0 MiB/s | 81.8 KiB | 00m00s [ 29/154] libattr-0:2.5.2-4.fc41.x86_64 100% | 8.9 MiB/s | 18.2 KiB | 00m00s [ 30/154] libacl-0:2.3.2-2.fc41.x86_64 100% | 8.0 MiB/s | 24.5 KiB | 00m00s [ 31/154] libeconf-0:0.6.2-3.fc41.x86_6 100% | 15.7 MiB/s | 32.2 KiB | 00m00s [ 32/154] libsemanage-0:3.7-2.fc41.x86_ 100% | 56.8 MiB/s | 116.3 KiB | 00m00s [ 33/154] libcap-0:2.70-4.fc41.x86_64 100% | 21.2 MiB/s | 86.7 KiB | 00m00s [ 34/154] gmp-1:6.3.0-2.fc41.x86_64 100% | 62.1 MiB/s | 318.0 KiB | 00m00s [ 35/154] mpfr-0:4.2.1-5.fc41.x86_64 100% | 112.7 MiB/s | 346.3 KiB | 00m00s [ 36/154] readline-0:8.2-10.fc41.x86_64 100% | 52.0 MiB/s | 213.2 KiB | 00m00s [ 37/154] coreutils-common-0:9.5-12.fc4 100% | 160.9 MiB/s | 2.1 MiB | 00m00s [ 38/154] glibc-common-0:2.40-28.fc41.x 100% | 87.7 MiB/s | 
359.4 KiB | 00m00s [ 39/154] ansible-srpm-macros-0:1-16.fc 100% | 5.1 MiB/s | 20.8 KiB | 00m00s [ 40/154] dwz-0:0.15-8.fc41.x86_64 100% | 67.8 MiB/s | 138.9 KiB | 00m00s [ 41/154] file-0:5.45-7.fc41.x86_64 100% | 24.0 MiB/s | 49.1 KiB | 00m00s [ 42/154] fonts-srpm-macros-1:2.0.5-17. 100% | 8.8 MiB/s | 27.0 KiB | 00m00s [ 43/154] fpc-srpm-macros-0:1.3-13.fc41 100% | 2.6 MiB/s | 8.0 KiB | 00m00s [ 44/154] ghc-srpm-macros-0:1.9.1-2.fc4 100% | 2.9 MiB/s | 9.1 KiB | 00m00s [ 45/154] gnat-srpm-macros-0:6-6.fc41.n 100% | 4.4 MiB/s | 9.0 KiB | 00m00s [ 46/154] kernel-srpm-macros-0:1.0-24.f 100% | 4.8 MiB/s | 9.9 KiB | 00m00s [ 47/154] lua-srpm-macros-0:1-14.fc41.n 100% | 4.3 MiB/s | 8.9 KiB | 00m00s [ 48/154] ocaml-srpm-macros-0:10-3.fc41 100% | 4.5 MiB/s | 9.2 KiB | 00m00s [ 49/154] openblas-srpm-macros-0:2-18.f 100% | 3.8 MiB/s | 7.7 KiB | 00m00s [ 50/154] perl-srpm-macros-0:1-56.fc41. 100% | 2.1 MiB/s | 8.5 KiB | 00m00s [ 51/154] package-notes-srpm-macros-0:0 100% | 2.4 MiB/s | 9.8 KiB | 00m00s [ 52/154] zig-srpm-macros-0:1-3.fc41.no 100% | 1.6 MiB/s | 8.1 KiB | 00m00s [ 53/154] rpm-0:4.20.1-1.fc41.x86_64 100% | 89.5 MiB/s | 549.6 KiB | 00m00s [ 54/154] zip-0:3.0-41.fc41.x86_64 100% | 36.9 MiB/s | 264.8 KiB | 00m00s [ 55/154] popt-0:1.19-7.fc41.x86_64 100% | 12.9 MiB/s | 65.9 KiB | 00m00s [ 56/154] util-linux-core-0:2.40.4-1.fc 100% | 80.1 MiB/s | 492.2 KiB | 00m00s [ 57/154] libcap-ng-0:0.8.5-3.fc41.x86_ 100% | 4.0 MiB/s | 32.6 KiB | 00m00s [ 58/154] libutempter-0:1.2.1-15.fc41.x 100% | 3.7 MiB/s | 26.6 KiB | 00m00s [ 59/154] xz-libs-1:5.8.1-2.fc41.x86_64 100% | 36.6 MiB/s | 112.4 KiB | 00m00s [ 60/154] ncurses-base-0:6.5-2.20240629 100% | 43.1 MiB/s | 88.3 KiB | 00m00s [ 61/154] libsepol-0:3.7-2.fc41.x86_64 100% | 111.4 MiB/s | 342.2 KiB | 00m00s [ 62/154] pcre2-syntax-0:10.44-1.fc41.1 100% | 48.8 MiB/s | 149.9 KiB | 00m00s [ 63/154] file-libs-0:5.45-7.fc41.x86_6 100% | 148.8 MiB/s | 762.0 KiB | 00m00s [ 64/154] basesystem-0:11-21.fc41.noarc 100% | 2.4 MiB/s | 7.4 KiB | 00m00s [ 65/154] glibc-gconv-extra-0:2.40-28.f 100% | 149.4 MiB/s | 1.6 MiB | 00m00s [ 66/154] glibc-0:2.40-28.fc41.x86_64 100% | 146.2 MiB/s | 2.2 MiB | 00m00s [ 67/154] rpm-libs-0:4.20.1-1.fc41.x86_ 100% | 38.1 MiB/s | 312.0 KiB | 00m00s [ 68/154] rpm-build-libs-0:4.20.1-1.fc4 100% | 19.3 MiB/s | 98.8 KiB | 00m00s [ 69/154] audit-libs-0:4.1.1-1.fc41.x86 100% | 45.3 MiB/s | 139.1 KiB | 00m00s [ 70/154] libxcrypt-0:4.4.38-7.fc41.x86 100% | 62.3 MiB/s | 127.5 KiB | 00m00s [ 71/154] pam-libs-0:1.6.1-8.fc41.x86_6 100% | 27.6 MiB/s | 56.5 KiB | 00m00s [ 72/154] setup-0:2.15.0-8.fc41.noarch 100% | 37.8 MiB/s | 154.6 KiB | 00m00s [ 73/154] libgcc-0:14.3.1-3.fc41.x86_64 100% | 28.2 MiB/s | 144.3 KiB | 00m00s [ 74/154] forge-srpm-macros-0:0.4.0-1.f 100% | 4.8 MiB/s | 19.7 KiB | 00m00s [ 75/154] zlib-ng-compat-0:2.2.3-2.fc41 100% | 19.3 MiB/s | 78.9 KiB | 00m00s [ 76/154] elfutils-libs-0:0.193-2.fc41. 100% | 65.4 MiB/s | 268.0 KiB | 00m00s [ 77/154] elfutils-libelf-0:0.193-2.fc4 100% | 40.5 MiB/s | 207.5 KiB | 00m00s [ 78/154] elfutils-0:0.193-2.fc41.x86_6 100% | 110.7 MiB/s | 566.7 KiB | 00m00s [ 79/154] json-c-0:0.17-4.fc41.x86_64 100% | 10.7 MiB/s | 44.0 KiB | 00m00s [ 80/154] elfutils-debuginfod-client-0: 100% | 7.6 MiB/s | 47.0 KiB | 00m00s [ 81/154] libblkid-0:2.40.4-1.fc41.x86_ 100% | 29.1 MiB/s | 119.2 KiB | 00m00s [ 82/154] libuuid-0:2.40.4-1.fc41.x86_6 100% | 13.2 MiB/s | 27.1 KiB | 00m00s [ 83/154] libsmartcols-0:2.40.4-1.fc41. 
100% | 25.8 MiB/s | 79.4 KiB | 00m00s [ 84/154] libmount-0:2.40.4-1.fc41.x86_ 100% | 36.4 MiB/s | 149.0 KiB | 00m00s [ 85/154] systemd-libs-0:256.17-1.fc41. 100% | 89.0 MiB/s | 728.9 KiB | 00m00s [ 86/154] libfdisk-0:2.40.4-1.fc41.x86_ 100% | 29.8 MiB/s | 152.5 KiB | 00m00s [ 87/154] pam-0:1.6.1-8.fc41.x86_64 100% | 90.3 MiB/s | 554.5 KiB | 00m00s [ 88/154] authselect-0:1.5.0-8.fc41.x86 100% | 35.6 MiB/s | 145.8 KiB | 00m00s [ 89/154] libnsl2-0:2.0.1-2.fc41.x86_64 100% | 14.5 MiB/s | 29.6 KiB | 00m00s [ 90/154] gdbm-libs-1:1.23-7.fc41.x86_6 100% | 13.7 MiB/s | 56.3 KiB | 00m00s [ 91/154] libpwquality-0:1.4.5-11.fc41. 100% | 29.1 MiB/s | 119.0 KiB | 00m00s [ 92/154] authselect-libs-0:1.5.0-8.fc4 100% | 53.2 MiB/s | 218.0 KiB | 00m00s [ 93/154] cracklib-0:2.9.11-6.fc41.x86_ 100% | 22.5 MiB/s | 92.1 KiB | 00m00s [ 94/154] libzstd-0:1.5.7-1.fc41.x86_64 100% | 102.7 MiB/s | 315.4 KiB | 00m00s [ 95/154] lua-libs-0:5.4.8-1.fc41.x86_6 100% | 25.7 MiB/s | 131.5 KiB | 00m00s [ 96/154] rpm-sequoia-0:1.7.0-5.fc41.x8 100% | 80.9 MiB/s | 911.4 KiB | 00m00s [ 97/154] libgomp-0:14.3.1-3.fc41.x86_6 100% | 51.0 MiB/s | 365.4 KiB | 00m00s [ 98/154] sqlite-libs-0:3.46.1-5.fc41.x 100% | 55.7 MiB/s | 741.4 KiB | 00m00s [ 99/154] jansson-0:2.13.1-10.fc41.x86_ 100% | 10.8 MiB/s | 44.4 KiB | 00m00s [100/154] debugedit-0:5.1-6.fc41.x86_64 100% | 19.4 MiB/s | 79.3 KiB | 00m00s [101/154] libarchive-0:3.7.4-4.fc41.x86 100% | 99.9 MiB/s | 409.1 KiB | 00m00s [102/154] lz4-libs-0:1.10.0-1.fc41.x86_ 100% | 23.0 MiB/s | 70.7 KiB | 00m00s [103/154] zstd-0:1.5.7-1.fc41.x86_64 100% | 78.7 MiB/s | 483.3 KiB | 00m00s [104/154] pkgconf-pkg-config-0:2.3.0-1. 100% | 2.0 MiB/s | 10.0 KiB | 00m00s [105/154] pkgconf-0:2.3.0-1.fc41.x86_64 100% | 14.7 MiB/s | 45.2 KiB | 00m00s [106/154] pkgconf-m4-0:2.3.0-1.fc41.noa 100% | 3.5 MiB/s | 14.3 KiB | 00m00s [107/154] binutils-0:2.43.1-8.fc41.x86_ 100% | 214.5 MiB/s | 6.4 MiB | 00m00s [108/154] libpkgconf-0:2.3.0-1.fc41.x86 100% | 4.7 MiB/s | 38.5 KiB | 00m00s [109/154] curl-0:8.9.1-4.fc41.x86_64 100% | 38.1 MiB/s | 311.8 KiB | 00m00s [110/154] build-reproducibility-srpm-ma 100% | 5.3 MiB/s | 10.8 KiB | 00m00s [111/154] add-determinism-0:0.3.6-3.fc4 100% | 142.6 MiB/s | 875.9 KiB | 00m00s [112/154] efi-srpm-macros-0:5-13.fc41.n 100% | 2.7 MiB/s | 22.5 KiB | 00m00s [113/154] go-srpm-macros-0:3.8.0-1.fc41 100% | 4.6 MiB/s | 28.3 KiB | 00m00s [114/154] pyproject-srpm-macros-0:1.18. 100% | 6.7 MiB/s | 13.7 KiB | 00m00s [115/154] python-srpm-macros-0:3.13-5.f 100% | 10.9 MiB/s | 22.4 KiB | 00m00s [116/154] qt5-srpm-macros-0:5.15.17-1.f 100% | 4.3 MiB/s | 8.7 KiB | 00m00s [117/154] qt6-srpm-macros-0:6.8.3-1.fc4 100% | 4.4 MiB/s | 9.1 KiB | 00m00s [118/154] rust-srpm-macros-0:26.4-1.fc4 100% | 5.4 MiB/s | 11.1 KiB | 00m00s [119/154] libtirpc-0:1.3.6-1.rc3.fc41.x 100% | 29.1 MiB/s | 89.4 KiB | 00m00s [120/154] libcom_err-0:1.47.1-6.fc41.x8 100% | 8.6 MiB/s | 26.6 KiB | 00m00s [121/154] gdbm-1:1.23-7.fc41.x86_64 100% | 49.4 MiB/s | 151.8 KiB | 00m00s [122/154] keyutils-libs-0:1.6.3-4.fc41. 100% | 15.5 MiB/s | 31.6 KiB | 00m00s [123/154] krb5-libs-0:1.21.3-5.fc41.x86 100% | 147.8 MiB/s | 756.9 KiB | 00m00s [124/154] libverto-0:0.3.2-9.fc41.x86_6 100% | 6.7 MiB/s | 20.7 KiB | 00m00s [125/154] libxml2-0:2.12.10-1.fc41.x86_ 100% | 111.9 MiB/s | 687.5 KiB | 00m00s [126/154] crypto-policies-0:20250707-1. 
100% | 23.9 MiB/s | 97.8 KiB | 00m00s [127/154] fedora-repos-0:41-3.noarch 100% | 2.2 MiB/s | 9.1 KiB | 00m00s [128/154] ca-certificates-0:2024.2.69_v 100% | 121.5 MiB/s | 871.2 KiB | 00m00s [129/154] openssl-libs-1:3.2.4-2.fc41.x 100% | 121.5 MiB/s | 2.3 MiB | 00m00s [130/154] fedora-gpg-keys-0:41-3.noarch 100% | 12.0 MiB/s | 135.5 KiB | 00m00s [131/154] elfutils-default-yama-scope-0 100% | 1.2 MiB/s | 12.6 KiB | 00m00s [132/154] alternatives-0:1.31-1.fc41.x8 100% | 9.6 MiB/s | 39.4 KiB | 00m00s [133/154] libstdc++-0:14.3.1-3.fc41.x86 100% | 125.4 MiB/s | 898.9 KiB | 00m00s [134/154] p11-kit-0:0.25.5-4.fc41.x86_6 100% | 79.7 MiB/s | 489.6 KiB | 00m00s [135/154] libffi-0:3.4.6-3.fc41.x86_64 100% | 9.8 MiB/s | 39.9 KiB | 00m00s [136/154] p11-kit-trust-0:0.25.5-4.fc41 100% | 32.2 MiB/s | 131.8 KiB | 00m00s [137/154] libtasn1-0:4.20.0-1.fc41.x86_ 100% | 14.5 MiB/s | 74.4 KiB | 00m00s [138/154] fedora-release-0:41-33.noarch 100% | 2.0 MiB/s | 12.3 KiB | 00m00s [139/154] xxhash-libs-0:0.8.3-1.fc41.x8 100% | 5.0 MiB/s | 35.9 KiB | 00m00s [140/154] fedora-release-identity-basic 100% | 1.8 MiB/s | 13.1 KiB | 00m00s [141/154] libcurl-0:8.9.1-4.fc41.x86_64 100% | 70.3 MiB/s | 360.1 KiB | 00m00s [142/154] libssh-0:0.11.3-1.fc41.x86_64 100% | 45.4 MiB/s | 232.7 KiB | 00m00s [143/154] libbrotli-0:1.1.0-5.fc41.x86_ 100% | 47.5 MiB/s | 340.5 KiB | 00m00s [144/154] libpsl-0:0.21.5-4.fc41.x86_64 100% | 15.6 MiB/s | 64.1 KiB | 00m00s [145/154] gdb-minimal-0:16.3-1.fc41.x86 100% | 162.4 MiB/s | 4.4 MiB | 00m00s [146/154] libssh-config-0:0.11.3-1.fc41 100% | 1.8 MiB/s | 9.1 KiB | 00m00s [147/154] libunistring-0:1.1-8.fc41.x86 100% | 88.7 MiB/s | 544.8 KiB | 00m00s [148/154] libidn2-0:2.3.8-1.fc41.x86_64 100% | 42.7 MiB/s | 175.0 KiB | 00m00s [149/154] publicsuffix-list-dafsa-0:202 100% | 14.5 MiB/s | 59.2 KiB | 00m00s [150/154] libnghttp2-0:1.62.1-3.fc41.x8 100% | 14.9 MiB/s | 76.3 KiB | 00m00s [151/154] openldap-0:2.6.10-1.fc41.x86_ 100% | 49.9 MiB/s | 255.7 KiB | 00m00s [152/154] cyrus-sasl-lib-0:2.1.28-27.fc 100% | 110.9 MiB/s | 794.9 KiB | 00m00s [153/154] libevent-0:2.1.12-14.fc41.x86 100% | 35.9 MiB/s | 257.5 KiB | 00m00s [154/154] libtool-ltdl-0:2.4.7-12.fc41. 100% | 5.8 MiB/s | 35.6 KiB | 00m00s -------------------------------------------------------------------------------- [154/154] Total 100% | 164.8 MiB/s | 53.2 MiB | 00m00s Running transaction Importing OpenPGP key 0xE99D6AD1: UserID : "Fedora (41) " Fingerprint: 466CF2D8B60BC3057AA9453ED0622462E99D6AD1 From : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-41-primary The key was successfully imported. [ 1/156] Verify package files 100% | 890.0 B/s | 154.0 B | 00m00s [ 2/156] Prepare transaction 100% | 4.3 KiB/s | 154.0 B | 00m00s [ 3/156] Installing libgcc-0:14.3.1-3. 100% | 269.8 MiB/s | 276.3 KiB | 00m00s [ 4/156] Installing publicsuffix-list- 100% | 0.0 B/s | 69.8 KiB | 00m00s [ 5/156] Installing libssh-config-0:0. 
100% | 0.0 B/s | 816.0 B | 00m00s [ 6/156] Installing fedora-release-ide 100% | 0.0 B/s | 912.0 B | 00m00s [ 7/156] Installing fedora-gpg-keys-0: 100% | 42.7 MiB/s | 174.8 KiB | 00m00s [ 8/156] Installing fedora-repos-0:41- 100% | 0.0 B/s | 5.7 KiB | 00m00s [ 9/156] Installing fedora-release-com 100% | 23.5 MiB/s | 24.0 KiB | 00m00s [ 10/156] Installing fedora-release-0:4 100% | 0.0 B/s | 124.0 B | 00m00s [ 11/156] Installing setup-0:2.15.0-8.f 100% | 59.1 MiB/s | 726.5 KiB | 00m00s >>> [RPM] /etc/hosts created as /etc/hosts.rpmnew [ 12/156] Installing filesystem-0:3.18- 100% | 3.8 MiB/s | 212.5 KiB | 00m00s [ 13/156] Installing basesystem-0:11-21 100% | 0.0 B/s | 124.0 B | 00m00s [ 14/156] Installing rust-srpm-macros-0 100% | 0.0 B/s | 5.6 KiB | 00m00s [ 15/156] Installing qt6-srpm-macros-0: 100% | 0.0 B/s | 732.0 B | 00m00s [ 16/156] Installing qt5-srpm-macros-0: 100% | 0.0 B/s | 776.0 B | 00m00s [ 17/156] Installing pkgconf-m4-0:2.3.0 100% | 0.0 B/s | 14.8 KiB | 00m00s [ 18/156] Installing pcre2-syntax-0:10. 100% | 248.1 MiB/s | 254.1 KiB | 00m00s [ 19/156] Installing ncurses-base-0:6.5 100% | 85.9 MiB/s | 351.7 KiB | 00m00s [ 20/156] Installing glibc-minimal-lang 100% | 0.0 B/s | 124.0 B | 00m00s [ 21/156] Installing ncurses-libs-0:6.5 100% | 239.7 MiB/s | 981.8 KiB | 00m00s [ 22/156] Installing glibc-0:2.40-28.fc 100% | 318.7 MiB/s | 6.7 MiB | 00m00s [ 23/156] Installing bash-0:5.2.32-1.fc 100% | 453.8 MiB/s | 8.2 MiB | 00m00s [ 24/156] Installing glibc-common-0:2.4 100% | 210.2 MiB/s | 1.1 MiB | 00m00s [ 25/156] Installing glibc-gconv-extra- 100% | 297.9 MiB/s | 8.0 MiB | 00m00s [ 26/156] Installing zlib-ng-compat-0:2 100% | 139.4 MiB/s | 142.7 KiB | 00m00s [ 27/156] Installing bzip2-libs-0:1.0.8 100% | 0.0 B/s | 81.8 KiB | 00m00s [ 28/156] Installing xz-libs-1:5.8.1-2. 100% | 213.9 MiB/s | 219.0 KiB | 00m00s [ 29/156] Installing readline-0:8.2-10. 100% | 241.8 MiB/s | 495.3 KiB | 00m00s [ 30/156] Installing popt-0:1.19-7.fc41 100% | 70.1 MiB/s | 143.5 KiB | 00m00s [ 31/156] Installing libuuid-0:2.40.4-1 100% | 0.0 B/s | 41.0 KiB | 00m00s [ 32/156] Installing libblkid-0:2.40.4- 100% | 252.1 MiB/s | 258.2 KiB | 00m00s [ 33/156] Installing libattr-0:2.5.2-4. 100% | 0.0 B/s | 29.5 KiB | 00m00s [ 34/156] Installing libacl-0:2.3.2-2.f 100% | 0.0 B/s | 40.7 KiB | 00m00s [ 35/156] Installing gmp-1:6.3.0-2.fc41 100% | 397.3 MiB/s | 813.7 KiB | 00m00s [ 36/156] Installing libxcrypt-0:4.4.38 100% | 284.4 MiB/s | 291.2 KiB | 00m00s [ 37/156] Installing libzstd-0:1.5.7-1. 100% | 393.1 MiB/s | 805.1 KiB | 00m00s [ 38/156] Installing elfutils-libelf-0: 100% | 390.1 MiB/s | 1.2 MiB | 00m00s [ 39/156] Installing libstdc++-0:14.3.1 100% | 395.9 MiB/s | 2.8 MiB | 00m00s [ 40/156] Installing libeconf-0:0.6.2-3 100% | 0.0 B/s | 59.7 KiB | 00m00s [ 41/156] Installing gdbm-libs-1:1.23-7 100% | 120.7 MiB/s | 123.6 KiB | 00m00s [ 42/156] Installing dwz-0:0.15-8.fc41. 
100% | 293.3 MiB/s | 300.3 KiB | 00m00s [ 43/156] Installing mpfr-0:4.2.1-5.fc4 100% | 271.4 MiB/s | 833.7 KiB | 00m00s [ 44/156] Installing gawk-0:5.3.0-4.fc4 100% | 288.7 MiB/s | 1.7 MiB | 00m00s [ 45/156] Installing unzip-0:6.0-64.fc4 100% | 190.6 MiB/s | 390.3 KiB | 00m00s [ 46/156] Installing file-libs-0:5.45-7 100% | 709.6 MiB/s | 9.9 MiB | 00m00s [ 47/156] Installing file-0:5.45-7.fc41 100% | 14.7 MiB/s | 105.0 KiB | 00m00s [ 48/156] Installing crypto-policies-0: 100% | 41.5 MiB/s | 170.1 KiB | 00m00s [ 49/156] Installing pcre2-0:10.44-1.fc 100% | 319.8 MiB/s | 654.9 KiB | 00m00s [ 50/156] Installing grep-0:3.11-9.fc41 100% | 250.8 MiB/s | 1.0 MiB | 00m00s [ 51/156] Installing xz-1:5.8.1-2.fc41. 100% | 267.1 MiB/s | 1.3 MiB | 00m00s [ 52/156] Installing libcap-ng-0:0.8.5- 100% | 69.4 MiB/s | 71.0 KiB | 00m00s [ 53/156] Installing audit-libs-0:4.1.1 100% | 380.7 MiB/s | 389.8 KiB | 00m00s [ 54/156] Installing pam-libs-0:1.6.1-8 100% | 138.1 MiB/s | 141.4 KiB | 00m00s [ 55/156] Installing libcap-0:2.70-4.fc 100% | 220.0 MiB/s | 225.2 KiB | 00m00s [ 56/156] Installing systemd-libs-0:256 100% | 338.4 MiB/s | 2.0 MiB | 00m00s [ 57/156] Installing libsepol-0:3.7-2.f 100% | 399.8 MiB/s | 818.8 KiB | 00m00s [ 58/156] Installing libselinux-0:3.7-5 100% | 178.0 MiB/s | 182.3 KiB | 00m00s [ 59/156] Installing sed-0:4.9-3.fc41.x 100% | 283.1 MiB/s | 869.7 KiB | 00m00s [ 60/156] Installing findutils-1:4.10.0 100% | 371.6 MiB/s | 1.9 MiB | 00m00s [ 61/156] Installing libmount-0:2.40.4- 100% | 341.6 MiB/s | 349.8 KiB | 00m00s [ 62/156] Installing libsmartcols-0:2.4 100% | 173.2 MiB/s | 177.4 KiB | 00m00s [ 63/156] Installing lua-libs-0:5.4.8-1 100% | 279.5 MiB/s | 286.2 KiB | 00m00s [ 64/156] Installing lz4-libs-0:1.10.0- 100% | 143.1 MiB/s | 146.6 KiB | 00m00s [ 65/156] Installing libcom_err-0:1.47. 100% | 0.0 B/s | 68.3 KiB | 00m00s [ 66/156] Installing alternatives-0:1.3 100% | 0.0 B/s | 66.4 KiB | 00m00s [ 67/156] Installing libffi-0:3.4.6-3.f 100% | 85.7 MiB/s | 87.8 KiB | 00m00s [ 68/156] Installing libtasn1-0:4.20.0- 100% | 177.9 MiB/s | 182.2 KiB | 00m00s [ 69/156] Installing p11-kit-0:0.25.5-4 100% | 276.4 MiB/s | 2.2 MiB | 00m00s [ 70/156] Installing libunistring-0:1.1 100% | 346.1 MiB/s | 1.7 MiB | 00m00s [ 71/156] Installing libidn2-0:2.3.8-1. 100% | 183.2 MiB/s | 562.8 KiB | 00m00s [ 72/156] Installing libpsl-0:0.21.5-4. 100% | 79.7 MiB/s | 81.7 KiB | 00m00s [ 73/156] Installing p11-kit-trust-0:0. 100% | 48.5 MiB/s | 397.1 KiB | 00m00s [ 74/156] Installing zstd-0:1.5.7-1.fc4 100% | 342.0 MiB/s | 1.7 MiB | 00m00s [ 75/156] Installing util-linux-core-0: 100% | 245.8 MiB/s | 1.5 MiB | 00m00s [ 76/156] Installing tar-2:1.35-4.fc41. 100% | 369.8 MiB/s | 3.0 MiB | 00m00s [ 77/156] Installing libsemanage-0:3.7- 100% | 144.2 MiB/s | 295.2 KiB | 00m00s [ 78/156] Installing shadow-utils-2:4.1 100% | 173.6 MiB/s | 4.2 MiB | 00m00s [ 79/156] Installing libutempter-0:1.2. 100% | 58.3 MiB/s | 59.7 KiB | 00m00s [ 80/156] Installing zip-0:3.0-41.fc41. 
100% | 230.2 MiB/s | 707.1 KiB | 00m00s [ 81/156] Installing gdbm-1:1.23-7.fc41 100% | 227.4 MiB/s | 465.8 KiB | 00m00s [ 82/156] Installing cyrus-sasl-lib-0:2 100% | 384.3 MiB/s | 2.3 MiB | 00m00s [ 83/156] Installing libfdisk-0:2.40.4- 100% | 349.0 MiB/s | 357.4 KiB | 00m00s [ 84/156] Installing libxml2-0:2.12.10- 100% | 344.1 MiB/s | 1.7 MiB | 00m00s [ 85/156] Installing bzip2-0:1.0.8-19.f 100% | 97.8 MiB/s | 100.2 KiB | 00m00s [ 86/156] Installing sqlite-libs-0:3.46 100% | 374.1 MiB/s | 1.5 MiB | 00m00s [ 87/156] Installing add-determinism-0: 100% | 392.6 MiB/s | 2.4 MiB | 00m00s [ 88/156] Installing build-reproducibil 100% | 0.0 B/s | 1.0 KiB | 00m00s [ 89/156] Installing ed-0:1.20.2-2.fc41 100% | 145.7 MiB/s | 149.2 KiB | 00m00s [ 90/156] Installing patch-0:2.7.6-25.f 100% | 261.9 MiB/s | 268.2 KiB | 00m00s [ 91/156] Installing elfutils-default-y 100% | 408.6 KiB/s | 2.0 KiB | 00m00s [ 92/156] Installing elfutils-libs-0:0. 100% | 336.2 MiB/s | 688.5 KiB | 00m00s [ 93/156] Installing cpio-0:2.15-2.fc41 100% | 274.9 MiB/s | 1.1 MiB | 00m00s [ 94/156] Installing diffutils-0:3.10-8 100% | 318.1 MiB/s | 1.6 MiB | 00m00s [ 95/156] Installing json-c-0:0.17-4.fc 100% | 81.7 MiB/s | 83.6 KiB | 00m00s [ 96/156] Installing libgomp-0:14.3.1-3 100% | 256.4 MiB/s | 525.0 KiB | 00m00s [ 97/156] Installing jansson-0:2.13.1-1 100% | 87.6 MiB/s | 89.7 KiB | 00m00s [ 98/156] Installing libpkgconf-0:2.3.0 100% | 0.0 B/s | 79.3 KiB | 00m00s [ 99/156] Installing pkgconf-0:2.3.0-1. 100% | 89.0 MiB/s | 91.1 KiB | 00m00s [100/156] Installing pkgconf-pkg-config 100% | 0.0 B/s | 1.8 KiB | 00m00s [101/156] Installing keyutils-libs-0:1. 100% | 54.5 MiB/s | 55.8 KiB | 00m00s [102/156] Installing libverto-0:0.3.2-9 100% | 0.0 B/s | 31.3 KiB | 00m00s [103/156] Installing xxhash-libs-0:0.8. 100% | 87.8 MiB/s | 89.9 KiB | 00m00s [104/156] Installing libbrotli-0:1.1.0- 100% | 273.4 MiB/s | 839.9 KiB | 00m00s [105/156] Installing libnghttp2-0:1.62. 
100% | 171.2 MiB/s | 175.3 KiB | 00m00s [106/156] Installing libtool-ltdl-0:2.4 100% | 65.7 MiB/s | 67.3 KiB | 00m00s [107/156] Installing perl-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s [108/156] Installing package-notes-srpm 100% | 0.0 B/s | 2.0 KiB | 00m00s [109/156] Installing openblas-srpm-macr 100% | 0.0 B/s | 392.0 B | 00m00s [110/156] Installing ocaml-srpm-macros- 100% | 0.0 B/s | 2.2 KiB | 00m00s [111/156] Installing kernel-srpm-macros 100% | 0.0 B/s | 2.3 KiB | 00m00s [112/156] Installing gnat-srpm-macros-0 100% | 0.0 B/s | 1.3 KiB | 00m00s [113/156] Installing ghc-srpm-macros-0: 100% | 0.0 B/s | 1.0 KiB | 00m00s [114/156] Installing fpc-srpm-macros-0: 100% | 0.0 B/s | 420.0 B | 00m00s [115/156] Installing ansible-srpm-macro 100% | 35.4 MiB/s | 36.2 KiB | 00m00s [116/156] Installing coreutils-common-0 100% | 399.6 MiB/s | 11.2 MiB | 00m00s [117/156] Installing openssl-libs-1:3.2 100% | 434.9 MiB/s | 7.8 MiB | 00m00s [118/156] Installing coreutils-0:9.5-12 100% | 263.9 MiB/s | 5.5 MiB | 00m00s [119/156] Installing ca-certificates-0: 100% | 4.0 MiB/s | 2.4 MiB | 00m01s [120/156] Installing krb5-libs-0:1.21.3 100% | 289.9 MiB/s | 2.3 MiB | 00m00s [121/156] Installing libarchive-0:3.7.4 100% | 302.3 MiB/s | 928.6 KiB | 00m00s [122/156] Installing libtirpc-0:1.3.6-1 100% | 194.7 MiB/s | 199.4 KiB | 00m00s [123/156] Installing gzip-0:1.13-2.fc41 100% | 192.7 MiB/s | 394.6 KiB | 00m00s [124/156] Installing authselect-libs-0: 100% | 204.4 MiB/s | 837.2 KiB | 00m00s [125/156] Installing cracklib-0:2.9.11- 100% | 81.5 MiB/s | 250.3 KiB | 00m00s [126/156] Installing libpwquality-0:1.4 100% | 140.0 MiB/s | 430.1 KiB | 00m00s [127/156] Installing libnsl2-0:2.0.1-2. 100% | 57.7 MiB/s | 59.1 KiB | 00m00s [128/156] Installing pam-0:1.6.1-8.fc41 100% | 185.9 MiB/s | 1.9 MiB | 00m00s [129/156] Installing libssh-0:0.11.3-1. 100% | 279.9 MiB/s | 573.3 KiB | 00m00s [130/156] Installing rpm-sequoia-0:1.7. 100% | 403.1 MiB/s | 2.4 MiB | 00m00s [131/156] Installing rpm-libs-0:4.20.1- 100% | 357.2 MiB/s | 731.5 KiB | 00m00s [132/156] Installing rpm-build-libs-0:4 100% | 206.6 MiB/s | 211.5 KiB | 00m00s [133/156] Installing libevent-0:2.1.12- 100% | 292.8 MiB/s | 899.5 KiB | 00m00s [134/156] Installing openldap-0:2.6.10- 100% | 317.3 MiB/s | 649.7 KiB | 00m00s [135/156] Installing libcurl-0:8.9.1-4. 100% | 402.0 MiB/s | 823.2 KiB | 00m00s [136/156] Installing elfutils-debuginfo 100% | 84.4 MiB/s | 86.5 KiB | 00m00s [137/156] Installing elfutils-0:0.193-2 100% | 424.1 MiB/s | 3.0 MiB | 00m00s [138/156] Installing binutils-0:2.43.1- 100% | 388.4 MiB/s | 27.6 MiB | 00m00s [139/156] Installing gdb-minimal-0:16.3 100% | 414.4 MiB/s | 13.3 MiB | 00m00s [140/156] Installing debugedit-0:5.1-6. 
100% | 198.9 MiB/s | 203.6 KiB | 00m00s [141/156] Installing curl-0:8.9.1-4.fc4 100% | 78.0 MiB/s | 798.6 KiB | 00m00s [142/156] Installing rpm-0:4.20.1-1.fc4 100% | 193.3 MiB/s | 2.5 MiB | 00m00s [143/156] Installing lua-srpm-macros-0: 100% | 0.0 B/s | 1.9 KiB | 00m00s [144/156] Installing zig-srpm-macros-0: 100% | 0.0 B/s | 1.7 KiB | 00m00s [145/156] Installing efi-srpm-macros-0: 100% | 0.0 B/s | 41.2 KiB | 00m00s [146/156] Installing fonts-srpm-macros- 100% | 55.7 MiB/s | 57.0 KiB | 00m00s [147/156] Installing forge-srpm-macros- 100% | 0.0 B/s | 40.3 KiB | 00m00s [148/156] Installing go-srpm-macros-0:3 100% | 61.6 MiB/s | 63.0 KiB | 00m00s [149/156] Installing python-srpm-macros 100% | 50.9 MiB/s | 52.2 KiB | 00m00s [150/156] Installing redhat-rpm-config- 100% | 92.9 MiB/s | 190.2 KiB | 00m00s [151/156] Installing rpm-build-0:4.20.1 100% | 98.8 MiB/s | 202.3 KiB | 00m00s [152/156] Installing pyproject-srpm-mac 100% | 2.4 MiB/s | 2.5 KiB | 00m00s [153/156] Installing util-linux-0:2.40. 100% | 176.4 MiB/s | 3.7 MiB | 00m00s [154/156] Installing authselect-0:1.5.0 100% | 79.1 MiB/s | 161.9 KiB | 00m00s [155/156] Installing which-0:2.21-42.fc 100% | 80.5 MiB/s | 82.4 KiB | 00m00s [156/156] Installing info-0:7.1.1-1.fc4 100% | 454.4 KiB/s | 362.2 KiB | 00m01s Complete! Finish: installing minimal buildroot with dnf5 Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: INFO: add-determinism-0.3.6-3.fc41.x86_64 alternatives-1.31-1.fc41.x86_64 ansible-srpm-macros-1-16.fc41.noarch audit-libs-4.1.1-1.fc41.x86_64 authselect-1.5.0-8.fc41.x86_64 authselect-libs-1.5.0-8.fc41.x86_64 basesystem-11-21.fc41.noarch bash-5.2.32-1.fc41.x86_64 binutils-2.43.1-8.fc41.x86_64 build-reproducibility-srpm-macros-0.3.6-3.fc41.noarch bzip2-1.0.8-19.fc41.x86_64 bzip2-libs-1.0.8-19.fc41.x86_64 ca-certificates-2024.2.69_v8.0.401-1.0.fc41.noarch coreutils-9.5-12.fc41.x86_64 coreutils-common-9.5-12.fc41.x86_64 cpio-2.15-2.fc41.x86_64 cracklib-2.9.11-6.fc41.x86_64 crypto-policies-20250707-1.git836bbee.fc41.noarch curl-8.9.1-4.fc41.x86_64 cyrus-sasl-lib-2.1.28-27.fc41.x86_64 debugedit-5.1-6.fc41.x86_64 diffutils-3.10-8.fc41.x86_64 dwz-0.15-8.fc41.x86_64 ed-1.20.2-2.fc41.x86_64 efi-srpm-macros-5-13.fc41.noarch elfutils-0.193-2.fc41.x86_64 elfutils-debuginfod-client-0.193-2.fc41.x86_64 elfutils-default-yama-scope-0.193-2.fc41.noarch elfutils-libelf-0.193-2.fc41.x86_64 elfutils-libs-0.193-2.fc41.x86_64 fedora-gpg-keys-41-3.noarch fedora-release-41-33.noarch fedora-release-common-41-33.noarch fedora-release-identity-basic-41-33.noarch fedora-repos-41-3.noarch file-5.45-7.fc41.x86_64 file-libs-5.45-7.fc41.x86_64 filesystem-3.18-23.fc41.x86_64 findutils-4.10.0-4.fc41.x86_64 fonts-srpm-macros-2.0.5-17.fc41.noarch forge-srpm-macros-0.4.0-1.fc41.noarch fpc-srpm-macros-1.3-13.fc41.noarch gawk-5.3.0-4.fc41.x86_64 gdb-minimal-16.3-1.fc41.x86_64 gdbm-1.23-7.fc41.x86_64 gdbm-libs-1.23-7.fc41.x86_64 ghc-srpm-macros-1.9.1-2.fc41.noarch glibc-2.40-28.fc41.x86_64 glibc-common-2.40-28.fc41.x86_64 glibc-gconv-extra-2.40-28.fc41.x86_64 glibc-minimal-langpack-2.40-28.fc41.x86_64 gmp-6.3.0-2.fc41.x86_64 gnat-srpm-macros-6-6.fc41.noarch go-srpm-macros-3.8.0-1.fc41.noarch gpg-pubkey-e99d6ad1-64d2612c grep-3.11-9.fc41.x86_64 gzip-1.13-2.fc41.x86_64 info-7.1.1-1.fc41.x86_64 jansson-2.13.1-10.fc41.x86_64 json-c-0.17-4.fc41.x86_64 kernel-srpm-macros-1.0-24.fc41.noarch keyutils-libs-1.6.3-4.fc41.x86_64 krb5-libs-1.21.3-5.fc41.x86_64 libacl-2.3.2-2.fc41.x86_64 libarchive-3.7.4-4.fc41.x86_64 
libattr-2.5.2-4.fc41.x86_64 libblkid-2.40.4-1.fc41.x86_64 libbrotli-1.1.0-5.fc41.x86_64 libcap-2.70-4.fc41.x86_64 libcap-ng-0.8.5-3.fc41.x86_64 libcom_err-1.47.1-6.fc41.x86_64 libcurl-8.9.1-4.fc41.x86_64 libeconf-0.6.2-3.fc41.x86_64 libevent-2.1.12-14.fc41.x86_64 libfdisk-2.40.4-1.fc41.x86_64 libffi-3.4.6-3.fc41.x86_64 libgcc-14.3.1-3.fc41.x86_64 libgomp-14.3.1-3.fc41.x86_64 libidn2-2.3.8-1.fc41.x86_64 libmount-2.40.4-1.fc41.x86_64 libnghttp2-1.62.1-3.fc41.x86_64 libnsl2-2.0.1-2.fc41.x86_64 libpkgconf-2.3.0-1.fc41.x86_64 libpsl-0.21.5-4.fc41.x86_64 libpwquality-1.4.5-11.fc41.x86_64 libselinux-3.7-5.fc41.x86_64 libsemanage-3.7-2.fc41.x86_64 libsepol-3.7-2.fc41.x86_64 libsmartcols-2.40.4-1.fc41.x86_64 libssh-0.11.3-1.fc41.x86_64 libssh-config-0.11.3-1.fc41.noarch libstdc++-14.3.1-3.fc41.x86_64 libtasn1-4.20.0-1.fc41.x86_64 libtirpc-1.3.6-1.rc3.fc41.x86_64 libtool-ltdl-2.4.7-12.fc41.x86_64 libunistring-1.1-8.fc41.x86_64 libutempter-1.2.1-15.fc41.x86_64 libuuid-2.40.4-1.fc41.x86_64 libverto-0.3.2-9.fc41.x86_64 libxcrypt-4.4.38-7.fc41.x86_64 libxml2-2.12.10-1.fc41.x86_64 libzstd-1.5.7-1.fc41.x86_64 lua-libs-5.4.8-1.fc41.x86_64 lua-srpm-macros-1-14.fc41.noarch lz4-libs-1.10.0-1.fc41.x86_64 mpfr-4.2.1-5.fc41.x86_64 ncurses-base-6.5-2.20240629.fc41.noarch ncurses-libs-6.5-2.20240629.fc41.x86_64 ocaml-srpm-macros-10-3.fc41.noarch openblas-srpm-macros-2-18.fc41.noarch openldap-2.6.10-1.fc41.x86_64 openssl-libs-3.2.4-2.fc41.x86_64 p11-kit-0.25.5-4.fc41.x86_64 p11-kit-trust-0.25.5-4.fc41.x86_64 package-notes-srpm-macros-0.5-12.fc41.noarch pam-1.6.1-8.fc41.x86_64 pam-libs-1.6.1-8.fc41.x86_64 patch-2.7.6-25.fc41.x86_64 pcre2-10.44-1.fc41.1.x86_64 pcre2-syntax-10.44-1.fc41.1.noarch perl-srpm-macros-1-56.fc41.noarch pkgconf-2.3.0-1.fc41.x86_64 pkgconf-m4-2.3.0-1.fc41.noarch pkgconf-pkg-config-2.3.0-1.fc41.x86_64 popt-1.19-7.fc41.x86_64 publicsuffix-list-dafsa-20250616-1.fc41.noarch pyproject-srpm-macros-1.18.4-1.fc41.noarch python-srpm-macros-3.13-5.fc41.noarch qt5-srpm-macros-5.15.17-1.fc41.noarch qt6-srpm-macros-6.8.3-1.fc41.noarch readline-8.2-10.fc41.x86_64 redhat-rpm-config-294-1.fc41.noarch rpm-4.20.1-1.fc41.x86_64 rpm-build-4.20.1-1.fc41.x86_64 rpm-build-libs-4.20.1-1.fc41.x86_64 rpm-libs-4.20.1-1.fc41.x86_64 rpm-sequoia-1.7.0-5.fc41.x86_64 rust-srpm-macros-26.4-1.fc41.noarch sed-4.9-3.fc41.x86_64 setup-2.15.0-8.fc41.noarch shadow-utils-4.15.1-12.fc41.x86_64 sqlite-libs-3.46.1-5.fc41.x86_64 systemd-libs-256.17-1.fc41.x86_64 tar-1.35-4.fc41.x86_64 unzip-6.0-64.fc41.x86_64 util-linux-2.40.4-1.fc41.x86_64 util-linux-core-2.40.4-1.fc41.x86_64 which-2.21-42.fc41.x86_64 xxhash-libs-0.8.3-1.fc41.x86_64 xz-5.8.1-2.fc41.x86_64 xz-libs-5.8.1-2.fc41.x86_64 zig-srpm-macros-1-3.fc41.noarch zip-3.0-41.fc41.x86_64 zlib-ng-compat-2.2.3-2.fc41.x86_64 zstd-1.5.7-1.fc41.x86_64 Start: buildsrpm Start: rpmbuild -bs Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc41.src.rpm Finish: rpmbuild -bs INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan INFO: /var/lib/mock/fedora-41-x86_64-1759552642.334703/root/var/log/dnf5.log INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz /bin/tar: Removing leading `/' from member names Finish: buildsrpm INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-dy0jv4eu/ollama/ollama.spec) Config(child) 0 minutes 17 seconds INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results INFO: Cleaning up build root 
('cleanup_on_success=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot INFO: Start(/var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc41.src.rpm) Config(fedora-41-x86_64) Start(bootstrap): chroot init INFO: mounting tmpfs at /var/lib/mock/fedora-41-x86_64-bootstrap-1759552642.334703/root. INFO: reusing tmpfs at /var/lib/mock/fedora-41-x86_64-bootstrap-1759552642.334703/root. INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start(bootstrap): cleaning package manager metadata Finish(bootstrap): cleaning package manager metadata Finish(bootstrap): chroot init Start: chroot init INFO: mounting tmpfs at /var/lib/mock/fedora-41-x86_64-1759552642.334703/root. INFO: calling preinit hooks INFO: enabled root cache Start: unpacking root cache Finish: unpacking root cache INFO: enabled package manager cache Start: cleaning package manager metadata Finish: cleaning package manager metadata INFO: enabled HW Info plugin INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-4.20.1-1.fc41.x86_64 rpm-sequoia-1.7.0-5.fc41.x86_64 dnf5-5.2.16.0-1.fc41.x86_64 dnf5-plugins-5.2.16.0-1.fc41.x86_64 Finish: chroot init Start: build phase for ollama-0.12.3-1.fc41.src.rpm Start: build setup for ollama-0.12.3-1.fc41.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc41.src.rpm Updating and loading repositories: Additional repo https_developer_downlo 100% | 18.8 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 18.7 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 7.2 KiB/s | 1.5 KiB | 00m00s fedora 100% | 80.6 KiB/s | 30.9 KiB | 00m00s updates 100% | 19.2 KiB/s | 6.2 KiB | 00m00s Repositories loaded. 
Package Arch Version Repository Size Installing: cmake x86_64 3.30.8-1.fc41 updates 32.7 MiB gcc-c++ x86_64 14.3.1-3.fc41 updates 38.2 MiB go-rpm-macros x86_64 3.8.0-1.fc41 updates 96.6 KiB go-vendor-tools noarch 0.8.0-1.fc41 updates 318.9 KiB systemd-rpm-macros noarch 256.17-1.fc41 updates 10.7 KiB Installing dependencies: annobin-docs noarch 12.69-1.fc41 fedora 97.7 KiB annobin-plugin-gcc x86_64 12.69-1.fc41 fedora 985.0 KiB cmake-data noarch 3.30.8-1.fc41 updates 8.2 MiB cmake-filesystem x86_64 3.30.8-1.fc41 updates 0.0 B cmake-rpm-macros noarch 3.30.8-1.fc41 updates 7.7 KiB cpp x86_64 14.3.1-3.fc41 updates 35.0 MiB emacs-filesystem noarch 1:30.0-3.fc41 fedora 0.0 B expat x86_64 2.7.2-1.fc41 updates 306.8 KiB gcc x86_64 14.3.1-3.fc41 updates 104.3 MiB gcc-plugin-annobin x86_64 14.3.1-3.fc41 updates 61.1 KiB glibc-devel x86_64 2.40-28.fc41 updates 2.3 MiB go-filesystem x86_64 3.8.0-1.fc41 updates 0.0 B golang x86_64 1.24.7-1.fc41 updates 8.9 MiB golang-bin x86_64 1.24.7-1.fc41 updates 121.6 MiB golang-src noarch 1.24.7-1.fc41 updates 79.2 MiB golist x86_64 0.10.4-5.fc41 fedora 4.2 MiB jsoncpp x86_64 1.9.5-8.fc41 fedora 253.4 KiB kernel-headers x86_64 6.16.2-100.fc41 updates 6.7 MiB libb2 x86_64 0.98.1-12.fc41 fedora 42.2 KiB libmpc x86_64 1.3.1-6.fc41 fedora 164.7 KiB libstdc++-devel x86_64 14.3.1-3.fc41 updates 15.4 MiB libuv x86_64 1:1.51.0-1.fc41 updates 571.7 KiB libxcrypt-devel x86_64 4.4.38-7.fc41 updates 30.8 KiB make x86_64 1:4.4.1-8.fc41 fedora 1.8 MiB mpdecimal x86_64 2.5.1-16.fc41 fedora 204.9 KiB python-pip-wheel noarch 24.2-3.fc41 updates 1.2 MiB python3 x86_64 3.13.7-1.fc41 updates 32.8 KiB python3-boolean.py noarch 4.0-8.fc41 fedora 522.5 KiB python3-libs x86_64 3.13.7-1.fc41 updates 40.6 MiB python3-license-expression noarch 30.4.1-2.fc41 updates 1.1 MiB python3-zstarfile noarch 0.2.0-3.fc41 fedora 23.8 KiB rhash x86_64 1.4.4-2.fc41 fedora 349.9 KiB tzdata noarch 2025b-1.fc41 updates 1.6 MiB vim-filesystem noarch 2:9.1.1775-1.fc41 updates 40.0 B Transaction Summary: Installing: 39 packages Total size of inbound packages is 140 MiB. Need to download 140 MiB. After this operation, 507 MiB extra will be used (install 507 MiB, remove 0 B). [ 1/39] go-rpm-macros-0:3.8.0-1.fc41.x8 100% | 2.5 MiB/s | 38.3 KiB | 00m00s [ 2/39] go-vendor-tools-0:0.8.0-1.fc41. 100% | 12.3 MiB/s | 125.5 KiB | 00m00s [ 3/39] systemd-rpm-macros-0:256.17-1.f 100% | 6.0 MiB/s | 30.7 KiB | 00m00s [ 4/39] jsoncpp-0:1.9.5-8.fc41.x86_64 100% | 32.3 MiB/s | 99.3 KiB | 00m00s [ 5/39] make-1:4.4.1-8.fc41.x86_64 100% | 81.8 MiB/s | 586.1 KiB | 00m00s [ 6/39] rhash-0:1.4.4-2.fc41.x86_64 100% | 47.8 MiB/s | 196.0 KiB | 00m00s [ 7/39] cmake-0:3.30.8-1.fc41.x86_64 100% | 187.4 MiB/s | 10.9 MiB | 00m00s [ 8/39] cmake-data-0:3.30.8-1.fc41.noar 100% | 117.0 MiB/s | 2.3 MiB | 00m00s [ 9/39] cmake-filesystem-0:3.30.8-1.fc4 100% | 2.1 MiB/s | 16.8 KiB | 00m00s [10/39] libmpc-0:1.3.1-6.fc41.x86_64 100% | 34.7 MiB/s | 71.1 KiB | 00m00s [11/39] gcc-c++-0:14.3.1-3.fc41.x86_64 100% | 159.4 MiB/s | 14.2 MiB | 00m00s [12/39] golist-0:0.10.4-5.fc41.x86_64 100% | 70.4 MiB/s | 1.5 MiB | 00m00s [13/39] go-filesystem-0:3.8.0-1.fc41.x8 100% | 804.7 KiB/s | 8.9 KiB | 00m00s [14/39] python3-zstarfile-0:0.2.0-3.fc4 100% | 1.8 MiB/s | 18.1 KiB | 00m00s [15/39] emacs-filesystem-1:30.0-3.fc41. 
100% | 892.7 KiB/s | 7.1 KiB | 00m00s [16/39] python3-0:3.13.7-1.fc41.x86_64 100% | 3.8 MiB/s | 30.8 KiB | 00m00s [17/39] cpp-0:14.3.1-3.fc41.x86_64 100% | 159.4 MiB/s | 12.0 MiB | 00m00s [18/39] python3-libs-0:3.13.7-1.fc41.x8 100% | 126.9 MiB/s | 9.1 MiB | 00m00s [19/39] libb2-0:0.98.1-12.fc41.x86_64 100% | 1.5 MiB/s | 25.7 KiB | 00m00s [20/39] mpdecimal-0:2.5.1-16.fc41.x86_6 100% | 12.4 MiB/s | 89.0 KiB | 00m00s [21/39] gcc-0:14.3.1-3.fc41.x86_64 100% | 224.0 MiB/s | 37.0 MiB | 00m00s [22/39] python3-license-expression-0:30 100% | 3.4 MiB/s | 133.7 KiB | 00m00s [23/39] python3-boolean.py-0:4.0-8.fc41 100% | 3.1 MiB/s | 111.7 KiB | 00m00s [24/39] golang-0:1.24.7-1.fc41.x86_64 100% | 59.5 MiB/s | 670.2 KiB | 00m00s [25/39] libstdc++-devel-0:14.3.1-3.fc41 100% | 53.4 MiB/s | 2.8 MiB | 00m00s [26/39] glibc-devel-0:2.40-28.fc41.x86_ 100% | 48.3 MiB/s | 593.6 KiB | 00m00s [27/39] vim-filesystem-2:9.1.1775-1.fc4 100% | 1.2 MiB/s | 15.4 KiB | 00m00s [28/39] expat-0:2.7.2-1.fc41.x86_64 100% | 11.7 MiB/s | 119.3 KiB | 00m00s [29/39] libuv-1:1.51.0-1.fc41.x86_64 100% | 23.6 MiB/s | 266.0 KiB | 00m00s [30/39] golang-src-0:1.24.7-1.fc41.noar 100% | 100.9 MiB/s | 13.1 MiB | 00m00s [31/39] python-pip-wheel-0:24.2-3.fc41. 100% | 57.2 MiB/s | 1.2 MiB | 00m00s [32/39] tzdata-0:2025b-1.fc41.noarch 100% | 69.7 MiB/s | 713.9 KiB | 00m00s [33/39] kernel-headers-0:6.16.2-100.fc4 100% | 129.5 MiB/s | 1.7 MiB | 00m00s [34/39] libxcrypt-devel-0:4.4.38-7.fc41 100% | 3.6 MiB/s | 29.4 KiB | 00m00s [35/39] annobin-plugin-gcc-0:12.69-1.fc 100% | 135.5 MiB/s | 971.0 KiB | 00m00s [36/39] gcc-plugin-annobin-0:14.3.1-3.f 100% | 10.8 MiB/s | 66.5 KiB | 00m00s [37/39] annobin-docs-0:12.69-1.fc41.noa 100% | 17.9 MiB/s | 91.8 KiB | 00m00s [38/39] cmake-rpm-macros-0:3.30.8-1.fc4 100% | 3.2 MiB/s | 16.3 KiB | 00m00s [39/39] golang-bin-0:1.24.7-1.fc41.x86_ 100% | 143.6 MiB/s | 29.3 MiB | 00m00s -------------------------------------------------------------------------------- [39/39] Total 100% | 317.7 MiB/s | 140.1 MiB | 00m00s Running transaction [ 1/41] Verify package files 100% | 98.0 B/s | 39.0 B | 00m00s [ 2/41] Prepare transaction 100% | 513.0 B/s | 39.0 B | 00m00s [ 3/41] Installing libmpc-0:1.3.1-6.fc4 100% | 162.3 MiB/s | 166.2 KiB | 00m00s [ 4/41] Installing expat-0:2.7.2-1.fc41 100% | 301.6 MiB/s | 308.9 KiB | 00m00s [ 5/41] Installing go-filesystem-0:3.8. 100% | 0.0 B/s | 392.0 B | 00m00s [ 6/41] Installing cmake-filesystem-0:3 100% | 7.1 MiB/s | 7.3 KiB | 00m00s [ 7/41] Installing make-1:4.4.1-8.fc41. 100% | 360.0 MiB/s | 1.8 MiB | 00m00s [ 8/41] Installing cpp-0:14.3.1-3.fc41. 100% | 388.8 MiB/s | 35.0 MiB | 00m00s [ 9/41] Installing annobin-docs-0:12.69 100% | 48.2 MiB/s | 98.8 KiB | 00m00s [10/41] Installing kernel-headers-0:6.1 100% | 227.6 MiB/s | 6.8 MiB | 00m00s [11/41] Installing libxcrypt-devel-0:4. 100% | 16.2 MiB/s | 33.1 KiB | 00m00s [12/41] Installing glibc-devel-0:2.40-2 100% | 178.8 MiB/s | 2.3 MiB | 00m00s [13/41] Installing gcc-0:14.3.1-3.fc41. 
100% | 436.7 MiB/s | 104.4 MiB | 00m00s [14/41] Installing tzdata-0:2025b-1.fc4 100% | 65.2 MiB/s | 1.9 MiB | 00m00s [15/41] Installing python-pip-wheel-0:2 100% | 620.9 MiB/s | 1.2 MiB | 00m00s [16/41] Installing libuv-1:1.51.0-1.fc4 100% | 280.5 MiB/s | 574.5 KiB | 00m00s [17/41] Installing vim-filesystem-2:9.1 100% | 2.3 MiB/s | 4.7 KiB | 00m00s [18/41] Installing libstdc++-devel-0:14 100% | 318.1 MiB/s | 15.6 MiB | 00m00s [19/41] Installing golang-src-0:1.24.7- 100% | 315.5 MiB/s | 80.1 MiB | 00m00s [20/41] Installing golang-0:1.24.7-1.fc 100% | 639.3 MiB/s | 9.0 MiB | 00m00s [21/41] Installing golang-bin-0:1.24.7- 100% | 476.9 MiB/s | 121.6 MiB | 00m00s [22/41] Installing mpdecimal-0:2.5.1-16 100% | 201.2 MiB/s | 206.0 KiB | 00m00s [23/41] Installing libb2-0:0.98.1-12.fc 100% | 8.5 MiB/s | 43.3 KiB | 00m00s [24/41] Installing python3-libs-0:3.13. 100% | 347.0 MiB/s | 40.9 MiB | 00m00s [25/41] Installing python3-0:3.13.7-1.f 100% | 33.8 MiB/s | 34.6 KiB | 00m00s [26/41] Installing cmake-rpm-macros-0:3 100% | 0.0 B/s | 8.3 KiB | 00m00s [27/41] Installing python3-zstarfile-0: 100% | 26.8 MiB/s | 27.5 KiB | 00m00s [28/41] Installing python3-boolean.py-0 100% | 259.6 MiB/s | 531.7 KiB | 00m00s [29/41] Installing python3-license-expr 100% | 545.1 MiB/s | 1.1 MiB | 00m00s [30/41] Installing emacs-filesystem-1:3 100% | 0.0 B/s | 544.0 B | 00m00s [31/41] Installing golist-0:0.10.4-5.fc 100% | 424.9 MiB/s | 4.2 MiB | 00m00s [32/41] Installing rhash-0:1.4.4-2.fc41 100% | 173.4 MiB/s | 355.1 KiB | 00m00s [33/41] Installing jsoncpp-0:1.9.5-8.fc 100% | 31.1 MiB/s | 254.9 KiB | 00m00s [34/41] Installing cmake-data-0:3.30.8- 100% | 135.3 MiB/s | 8.8 MiB | 00m00s [35/41] Installing cmake-0:3.30.8-1.fc4 100% | 424.9 MiB/s | 32.7 MiB | 00m00s [36/41] Installing go-rpm-macros-0:3.8. 100% | 97.2 MiB/s | 99.5 KiB | 00m00s [37/41] Installing go-vendor-tools-0:0. 100% | 110.7 MiB/s | 340.2 KiB | 00m00s [38/41] Installing gcc-c++-0:14.3.1-3.f 100% | 419.4 MiB/s | 38.2 MiB | 00m00s [39/41] Installing annobin-plugin-gcc-0 100% | 74.1 MiB/s | 986.7 KiB | 00m00s [40/41] Installing gcc-plugin-annobin-0 100% | 4.7 MiB/s | 62.7 KiB | 00m00s [41/41] Installing systemd-rpm-macros-0 100% | 95.3 KiB/s | 11.2 KiB | 00m00s Complete! Finish: build setup for ollama-0.12.3-1.fc41.src.rpm Start: rpmbuild ollama-0.12.3-1.fc41.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%mkbuilddir): /bin/sh -e /var/tmp/rpm-tmp.8E7Nda Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.uZNt6b + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd /builddir/build/BUILD/ollama-0.12.3-build + rm -rf ollama-0.12.3 + /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/ollama-0.12.3.tar.gz + STATUS=0 + '[' 0 -ne 0 ']' + cd ollama-0.12.3 + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + rm -fr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/vendor + [[ ! -e /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin ]] + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin' + export GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + [[ ! 
-e /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama ]] ++ dirname /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama' + ln -fs /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/vendor.tar.bz2 + STATUS=0 + '[' 0 -ne 0 ']' + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/remove-runtime-for-cuda-and-rocm.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/replace-library-paths.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/vendor-pdevine-tensor-fix-cannonical-import-paths.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + cp /builddir/build/SOURCES/LICENSE.sentencepiece convert/sentencepiece/LICENSE + RPM_EC=0 ++ jobs -p + exit 0 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.Aq3TWn + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc41.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires Updating and loading repositories: Additional repo https_developer_downlo 100% | 20.7 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 20.7 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 8.0 KiB/s | 1.5 KiB | 00m00s fedora 100% | 87.7 KiB/s | 30.9 KiB | 00m00s updates 100% | 25.9 KiB/s | 6.2 KiB | 00m00s Repositories loaded. Package Arch Version Repository Size Installing: askalono-cli x86_64 0.5.0-1.fc41 updates 4.6 MiB Transaction Summary: Installing: 1 package Package "cmake-3.30.8-1.fc41.x86_64" is already installed. Package "gcc-c++-14.3.1-3.fc41.x86_64" is already installed. Package "go-rpm-macros-3.8.0-1.fc41.x86_64" is already installed. Package "go-vendor-tools-0.8.0-1.fc41.noarch" is already installed. Package "systemd-rpm-macros-256.17-1.fc41.noarch" is already installed. Total size of inbound packages is 2 MiB. Need to download 2 MiB. After this operation, 5 MiB extra will be used (install 5 MiB, remove 0 B). [1/1] askalono-cli-0:0.5.0-1.fc41.x86_6 100% | 85.8 MiB/s | 2.3 MiB | 00m00s -------------------------------------------------------------------------------- [1/1] Total 100% | 82.7 MiB/s | 2.3 MiB | 00m00s Running transaction [1/3] Verify package files 100% | 166.0 B/s | 1.0 B | 00m00s [2/3] Prepare transaction 100% | 71.0 B/s | 1.0 B | 00m00s [3/3] Installing askalono-cli-0:0.5.0-1 100% | 217.2 MiB/s | 4.6 MiB | 00m00s Complete! 
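Note on the %build step that follows: the GO_LDFLAGS export below injects strings into the binary at link time with Go's -ldflags "-X importpath.name=value" mechanism (version 0.12.3, the /usr/lib64 library directories, and the server release mode). As a minimal, self-contained sketch of that mechanism only — main.Version here is a hypothetical stand-in, not one of ollama's actual variables — the linker overwrites a package-level string:

// ldflags-demo.go -- minimal sketch of Go's "-X" link-time string injection.
// main.Version is a hypothetical placeholder for the variables set via
// GO_LDFLAGS in the %build step below (version, libDir, server mode).
package main

import "fmt"

// Version carries a development default; the linker can replace it with:
//   go build -ldflags "-X main.Version=0.12.3" .
var Version = "devel"

func main() {
	fmt.Println("version:", Version)
}

Built plainly this prints "version: devel"; built with go build -ldflags "-X main.Version=0.12.3" it prints "version: 0.12.3". The same mechanism is what lets the build below stamp the package version and library paths into the ollama binary without patching the source.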
Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.LBQpn6 + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc41.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires Updating and loading repositories: Additional repo https_developer_downlo 100% | 19.1 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 19.1 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 7.4 KiB/s | 1.5 KiB | 00m00s fedora 100% | 76.2 KiB/s | 30.9 KiB | 00m00s updates 100% | 24.2 KiB/s | 6.2 KiB | 00m00s Repositories loaded. Nothing to do. Package "askalono-cli-0.5.0-1.fc41.x86_64" is already installed. Package "cmake-3.30.8-1.fc41.x86_64" is already installed. Package "gcc-c++-14.3.1-3.fc41.x86_64" is already installed. Package "go-rpm-macros-3.8.0-1.fc41.x86_64" is already installed. Package "go-vendor-tools-0.8.0-1.fc41.noarch" is already installed. Package "systemd-rpm-macros-256.17-1.fc41.noarch" is already installed. Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.1OI9eV + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.lNSftH + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + export 'GO_LDFLAGS= -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release' + GO_LDFLAGS=' -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release' ++ echo ollama-0.12.3-1.fc41-1759536000 ++ sha1sum ++ cut -d ' ' -f1 + GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + GO111MODULE=on + go build -buildmode pie -compiler gc '-tags=rpm_crashtraceback ' -a -v -ldflags ' -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release -X github.com/ollama/ollama/version=0.12.3 -B 0xb7318585fe62505c73556e658ffbf0ff4f106451 -compressdwarf=false -linkmode=external -extldflags '\''-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '\''' -o /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama github.com/ollama/ollama internal/unsafeheader internal/byteorder internal/goarch internal/coverage/rtcov internal/cpu internal/abi internal/chacha8rand internal/godebugs internal/goexperiment internal/bytealg internal/goos internal/profilerecord internal/runtime/atomic internal/asan internal/msan internal/runtime/math internal/runtime/sys internal/runtime/syscall internal/runtime/exithook internal/stringslite sync/atomic math/bits internal/itoa unicode unicode/utf8 cmp math 
crypto/internal/fips140deps/byteorder internal/race crypto/internal/fips140deps/cpu internal/runtime/maps internal/sync crypto/internal/fips140/alias crypto/internal/fips140/subtle crypto/internal/boring/sig encoding unicode/utf16 github.com/rivo/uniseg internal/nettrace vendor/golang.org/x/crypto/cryptobyte/asn1 golang.org/x/crypto/internal/alias log/internal log/slog/internal github.com/ollama/ollama/version container/list vendor/golang.org/x/crypto/internal/alias runtime golang.org/x/text/encoding/internal/identifier golang.org/x/text/internal/utf8internal github.com/ollama/ollama/fs hash/maphash image/color golang.org/x/image/math/f64 github.com/gin-gonic/gin/internal/bytesconv golang.org/x/net/html/atom github.com/go-playground/locales/currency github.com/leodido/go-urn/scim/schema github.com/pelletier/go-toml/v2/internal/characters google.golang.org/protobuf/internal/flags github.com/d4l3k/go-bfloat16 google.golang.org/protobuf/internal/set github.com/apache/arrow/go/arrow/internal/debug golang.org/x/xerrors/internal github.com/chewxy/math32 gorgonia.org/vecf64 math/cmplx gonum.org/v1/gonum/blas gonum.org/v1/gonum/internal/asm/c128 gorgonia.org/vecf32 gonum.org/v1/gonum/internal/math32 gonum.org/v1/gonum/lapack gonum.org/v1/gonum/internal/asm/f64 gonum.org/v1/gonum/internal/cmplx64 gonum.org/v1/gonum/internal/asm/c64 gonum.org/v1/gonum/internal/asm/f32 gonum.org/v1/gonum/mathext/internal/amos gonum.org/v1/gonum/mathext/internal/gonum gonum.org/v1/gonum/mathext/internal/cephes github.com/ollama/ollama/server/internal/internal/stringsx github.com/agnivade/levenshtein gonum.org/v1/gonum/mathext internal/reflectlite iter crypto/subtle sync slices weak maps errors sort internal/oserror strconv internal/bisect syscall internal/godebug io path bytes strings hash crypto crypto/internal/fips140deps/godebug internal/testlog math/rand/v2 reflect crypto/internal/randutil math/rand time bufio crypto/internal/fips140 crypto/internal/impl crypto/internal/fips140/sha256 crypto/internal/fips140/sha3 crypto/internal/fips140/sha512 internal/syscall/unix internal/syscall/execenv crypto/internal/fips140/hmac regexp/syntax crypto/internal/fips140/check crypto/internal/fips140/aes context io/fs internal/poll internal/filepathlite crypto/internal/fips140/edwards25519/field crypto/internal/fips140/edwards25519 regexp os vendor/golang.org/x/net/dns/dnsmessage internal/singleflight internal/fmtsort encoding/binary unique runtime/cgo net/netip encoding/base64 crypto/internal/sysrand fmt crypto/internal/entropy crypto/internal/fips140/drbg crypto/internal/fips140/ed25519 crypto/internal/fips140only crypto/internal/fips140/aes/gcm math/big crypto/cipher encoding/json crypto/internal/boring encoding/pem golang.org/x/sys/unix github.com/mattn/go-runewidth crypto/rand encoding/csv crypto/ed25519 github.com/olekukonko/tablewriter crypto/md5 crypto/sha1 database/sql/driver encoding/hex crypto/aes crypto/des crypto/dsa crypto/internal/fips140/nistec/fiat crypto/internal/boring/bbig net crypto/internal/fips140/bigmod github.com/containerd/console crypto/sha3 crypto/internal/fips140hash crypto/sha512 encoding/asn1 crypto/hmac crypto/rc4 crypto/internal/fips140/rsa crypto/internal/fips140/nistec vendor/golang.org/x/crypto/cryptobyte crypto/rsa crypto/sha256 crypto/x509/pkix net/url path/filepath golang.org/x/crypto/chacha20 golang.org/x/crypto/internal/poly1305 log golang.org/x/crypto/blowfish golang.org/x/crypto/ssh/internal/bcrypt_pbkdf log/slog/internal/buffer log/slog github.com/ollama/ollama/format compress/flate 
crypto/internal/fips140/ecdh crypto/ecdh crypto/elliptic crypto/internal/fips140/ecdsa golang.org/x/crypto/curve25519 hash/crc32 github.com/ollama/ollama/types/model compress/gzip crypto/internal/fips140/hkdf crypto/ecdsa crypto/internal/fips140/mlkem crypto/internal/fips140/tls12 crypto/internal/fips140/tls13 vendor/golang.org/x/crypto/chacha20 vendor/golang.org/x/crypto/internal/poly1305 vendor/golang.org/x/sys/cpu crypto/tls/internal/fips140tls vendor/golang.org/x/text/transform vendor/golang.org/x/crypto/chacha20poly1305 vendor/golang.org/x/text/unicode/bidi vendor/golang.org/x/text/unicode/norm crypto/internal/hpke vendor/golang.org/x/net/http2/hpack vendor/golang.org/x/text/secure/bidirule mime mime/quotedprintable net/http/internal net/http/internal/ascii golang.org/x/sync/errgroup golang.org/x/text/transform golang.org/x/text/encoding golang.org/x/text/encoding/internal golang.org/x/text/runes vendor/golang.org/x/net/idna os/user golang.org/x/text/encoding/unicode golang.org/x/term github.com/ollama/ollama/progress github.com/emirpasic/gods/v2/utils github.com/emirpasic/gods/v2/containers github.com/emirpasic/gods/v2/lists github.com/emirpasic/gods/v2/lists/arraylist flag github.com/google/uuid crypto/x509 github.com/ollama/ollama/envconfig net/textproto vendor/golang.org/x/net/http/httpproxy github.com/ollama/ollama/readline vendor/golang.org/x/net/http/httpguts mime/multipart embed github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 github.com/ollama/ollama/llama/llama.cpp/common github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile golang.org/x/crypto/ssh github.com/ollama/ollama/auth crypto/tls github.com/ollama/ollama/llama/llama.cpp/tools/mtmd net/http/httptrace net/http github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu github.com/ollama/ollama/api github.com/ollama/ollama/parser github.com/ollama/ollama/discover github.com/ollama/ollama/fs/util/bufioutil github.com/ollama/ollama/fs/ggml github.com/ollama/ollama/logutil github.com/ollama/ollama/ml container/heap github.com/dlclark/regexp2/syntax github.com/dlclark/regexp2 github.com/emirpasic/gods/v2/trees github.com/emirpasic/gods/v2/trees/binaryheap github.com/ollama/ollama/model/input github.com/ollama/ollama/kvcache github.com/ollama/ollama/ml/nn/rope github.com/ollama/ollama/ml/nn/pooling image golang.org/x/image/bmp hash/adler32 compress/zlib golang.org/x/image/ccitt golang.org/x/image/tiff/lzw golang.org/x/image/tiff io/ioutil golang.org/x/image/riff golang.org/x/image/vp8 golang.org/x/image/vp8l golang.org/x/image/webp image/internal/imageutil image/jpeg image/png golang.org/x/sync/semaphore os/exec github.com/ollama/ollama/runner/common github.com/ollama/ollama/ml/nn github.com/ollama/ollama/ml/nn/fast image/draw golang.org/x/image/draw github.com/ollama/ollama/model/imageproc encoding/xml github.com/gin-contrib/sse github.com/gin-gonic/gin/internal/json golang.org/x/net/html github.com/gabriel-vasile/mimetype/internal/charset debug/dwarf internal/saferio debug/macho github.com/gabriel-vasile/mimetype/internal/json github.com/gabriel-vasile/mimetype/internal/magic github.com/gabriel-vasile/mimetype github.com/go-playground/locales github.com/go-playground/universal-translator github.com/leodido/go-urn golang.org/x/sys/cpu golang.org/x/crypto/sha3 golang.org/x/text/internal/tag golang.org/x/text/internal/language golang.org/x/text/internal/language/compact golang.org/x/text/language github.com/go-playground/validator/v10 github.com/pelletier/go-toml/v2/internal/danger 
github.com/pelletier/go-toml/v2/unstable github.com/pelletier/go-toml/v2/internal/tracker github.com/pelletier/go-toml/v2 encoding/gob go/token html text/template/parse text/template html/template net/rpc github.com/ugorji/go/codec hash/fnv google.golang.org/protobuf/internal/detrand google.golang.org/protobuf/internal/errors google.golang.org/protobuf/encoding/protowire google.golang.org/protobuf/internal/pragma google.golang.org/protobuf/reflect/protoreflect google.golang.org/protobuf/internal/encoding/messageset google.golang.org/protobuf/internal/genid google.golang.org/protobuf/internal/order google.golang.org/protobuf/internal/strs google.golang.org/protobuf/reflect/protoregistry google.golang.org/protobuf/runtime/protoiface google.golang.org/protobuf/proto gopkg.in/yaml.v3 github.com/gin-gonic/gin/binding github.com/gin-gonic/gin/render github.com/mattn/go-isatty golang.org/x/text/unicode/bidi golang.org/x/text/secure/bidirule golang.org/x/text/unicode/norm golang.org/x/net/idna golang.org/x/net/http/httpguts golang.org/x/net/http2/hpack golang.org/x/net/internal/httpcommon golang.org/x/net/http2 golang.org/x/net/http2/h2c net/http/httputil github.com/gin-gonic/gin github.com/gin-contrib/cors archive/tar archive/zip golang.org/x/text/encoding/unicode/utf32 github.com/nlpodyssey/gopickle/types github.com/nlpodyssey/gopickle/pickle github.com/nlpodyssey/gopickle/pytorch google.golang.org/protobuf/internal/descfmt google.golang.org/protobuf/internal/descopts google.golang.org/protobuf/internal/editiondefaults google.golang.org/protobuf/internal/encoding/text google.golang.org/protobuf/internal/encoding/defval google.golang.org/protobuf/internal/filedesc google.golang.org/protobuf/encoding/prototext google.golang.org/protobuf/internal/encoding/tag google.golang.org/protobuf/internal/impl google.golang.org/protobuf/internal/filetype google.golang.org/protobuf/internal/version google.golang.org/protobuf/runtime/protoimpl github.com/ollama/ollama/convert/sentencepiece github.com/apache/arrow/go/arrow/endian github.com/apache/arrow/go/arrow/internal/cpu github.com/apache/arrow/go/arrow/memory github.com/apache/arrow/go/arrow/bitutil github.com/apache/arrow/go/arrow/decimal128 github.com/apache/arrow/go/arrow/float16 golang.org/x/xerrors github.com/apache/arrow/go/arrow github.com/apache/arrow/go/arrow/array github.com/apache/arrow/go/arrow/tensor github.com/pkg/errors github.com/xtgo/set github.com/chewxy/hm github.com/google/flatbuffers/go github.com/pdevine/tensor/internal/storage github.com/pdevine/tensor/internal/execution github.com/ollama/ollama/ml/backend/ggml/ggml/src github.com/pdevine/tensor/internal/serialization/fb github.com/gogo/protobuf/proto google.golang.org/protobuf/types/descriptorpb google.golang.org/protobuf/internal/editionssupport google.golang.org/protobuf/types/gofeaturespb google.golang.org/protobuf/reflect/protodesc github.com/golang/protobuf/proto go4.org/unsafe/assume-no-moving-gc gonum.org/v1/gonum/blas/gonum github.com/gogo/protobuf/protoc-gen-gogo/descriptor gonum.org/v1/gonum/floats/scalar gonum.org/v1/gonum/floats github.com/x448/float16 golang.org/x/exp/rand github.com/gogo/protobuf/gogoproto gonum.org/v1/gonum/stat/combin github.com/pdevine/tensor/internal/serialization/pb github.com/ollama/ollama/fs/gguf github.com/ollama/ollama/harmony github.com/ollama/ollama/model/parsers github.com/ollama/ollama/model/renderers github.com/ollama/ollama/openai gonum.org/v1/gonum/blas/blas64 gonum.org/v1/gonum/blas/cblas128 gonum.org/v1/gonum/lapack/gonum 
github.com/ollama/ollama/server/internal/internal/names github.com/ollama/ollama/server/internal/cache/blob runtime/debug github.com/ollama/ollama/server/internal/internal/backoff github.com/ollama/ollama/template github.com/ollama/ollama/server/internal/client/ollama github.com/ollama/ollama/thinking github.com/ollama/ollama/server/internal/registry github.com/ollama/ollama/tools github.com/ollama/ollama/types/errtypes os/signal github.com/ollama/ollama/types/syncmap github.com/spf13/pflag github.com/spf13/cobra gonum.org/v1/gonum/lapack/lapack64 gonum.org/v1/gonum/mat gonum.org/v1/gonum/stat github.com/pdevine/tensor gonum.org/v1/gonum/stat/distuv github.com/pdevine/tensor/native github.com/ollama/ollama/convert github.com/ollama/ollama/ml/backend/ggml github.com/ollama/ollama/llama/llama.cpp/src github.com/ollama/ollama/ml/backend github.com/ollama/ollama/model github.com/ollama/ollama/model/models/bert github.com/ollama/ollama/model/models/gemma2 github.com/ollama/ollama/model/models/deepseek2 github.com/ollama/ollama/model/models/gemma3 github.com/ollama/ollama/model/models/gemma3n github.com/ollama/ollama/model/models/gptoss github.com/ollama/ollama/model/models/llama github.com/ollama/ollama/model/models/llama4 github.com/ollama/ollama/model/models/mistral3 github.com/ollama/ollama/model/models/mllama github.com/ollama/ollama/model/models/qwen2 github.com/ollama/ollama/model/models/qwen25vl github.com/ollama/ollama/model/models/qwen3 github.com/ollama/ollama/model/models github.com/ollama/ollama/llama github.com/ollama/ollama/sample github.com/ollama/ollama/llm github.com/ollama/ollama/runner/llamarunner github.com/ollama/ollama/runner/ollamarunner github.com/ollama/ollama/server github.com/ollama/ollama/runner github.com/ollama/ollama/cmd github.com/ollama/ollama + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + /usr/bin/cmake -S . -B redhat-linux-build_ggml-cpu -DCMAKE_C_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_Fortran_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_INSTALL_DO_STRIP:BOOL=OFF -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_FULL_SBINDIR:PATH=/usr/sbin -DCMAKE_INSTALL_SBINDIR:PATH=sbin -DINCLUDE_INSTALL_DIR:PATH=/usr/include -DLIB_INSTALL_DIR:PATH=/usr/lib64 -DSYSCONF_INSTALL_DIR:PATH=/etc -DSHARE_INSTALL_PREFIX:PATH=/usr/share -DLIB_SUFFIX=64 -DBUILD_SHARED_LIBS:BOOL=ON --preset CPU Preset CMake variables: CMAKE_BUILD_TYPE="Release" CMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded" -- The C compiler identification is GNU 14.3.1 -- The CXX compiler identification is GNU 14.3.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/gcc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/g++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- GGML_SYSTEM_ARCH: x86 -- Including CPU backend -- x86 detected -- Adding CPU backend variant ggml-cpu-x64: -- x86 detected -- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42 -- x86 detected -- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX -- x86 detected -- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2 -- x86 detected -- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512 -- x86 detected -- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI -- x86 detected -- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI -- Looking for a CUDA compiler -- Looking for a CUDA compiler - NOTFOUND -- 
Looking for a HIP compiler -- Looking for a HIP compiler - NOTFOUND -- Configuring done (0.5s) -- Generating done (0.0s) CMake Warning: Manually-specified variables were not used by the project: CMAKE_Fortran_FLAGS_RELEASE CMAKE_INSTALL_DO_STRIP INCLUDE_INSTALL_DIR LIB_SUFFIX SHARE_INSTALL_PREFIX SYSCONF_INSTALL_DIR -- Build files have been written to: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu + /usr/bin/cmake --build redhat-linux-build_ggml-cpu -j4 --verbose --target ggml-cpu Change Dir: '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile -j4 ggml-cpu /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/gmake -f CMakeFiles/Makefile2 ggml-cpu gmake[1]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/CMakeFiles 99 /usr/bin/gmake -f CMakeFiles/Makefile2 ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/all gmake[2]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 1%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && 
/usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 2%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF CMakeFiles/ggml-base.dir/ggml.c.o.d -o CMakeFiles/ggml-base.dir/ggml.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c [ 2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5663:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function] 5663 | static void ggml_hash_map_free(struct hash_map * map) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5656:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function] 5656 | static struct hash_map * ggml_new_hash_map(size_t size) { | ^~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 3%] Built target ggml-cpu-sse42-feats [ 3%] Built target ggml-cpu-x64-feats /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 4%] Built target ggml-cpu-alderlake-feats /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 4%] Built target ggml-cpu-sandybridge-feats [ 4%] Built target ggml-cpu-haswell-feats [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o -MF CMakeFiles/ggml-base.dir/ggml.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp [ 5%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 5%] Built target ggml-cpu-skylakex-feats [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp:1: 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' 
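
The repeated -Wunused-function diagnostics above all point at static helper definitions in ggml-impl.h: every translation unit that includes that header but never calls a given helper re-emits the same warning under -Wall, which is why the identical list shows up again for ggml-alloc.c, ggml-backend.cpp, ggml-opt.cpp, ggml-quants.c and gguf.cpp. The warnings are informational only and do not affect the build. A minimal sketch of the mechanism follows; the file and function names are illustrative and are not taken from the ggml sources.

/* helpers.h -- stand-in for a header that defines, rather than merely
 * declares, static helper functions. */
#ifndef HELPERS_H
#define HELPERS_H
#include <stddef.h>

/* A plain `static` definition is copied into every translation unit that
 * includes this header; any TU that never calls it triggers
 * -Wunused-function under -Wall. */
static size_t bitset_size(size_t n) {
    return (n + 31) / 32;
}

/* Marking the helper `static inline` (or tagging it
 * __attribute__((unused))) keeps the definition in the header without
 * the warning. */
static inline size_t bitset_size_quiet(size_t n) {
    return (n + 31) / 32;
}

#endif /* HELPERS_H */

/* user.c -- includes the header but uses neither helper, so
 *   gcc -Wall -c user.c
 * warns about bitset_size() and stays silent about bitset_size_quiet(). */
#include "helpers.h"

int unrelated(void) { return 0; }

The per-CPU-variant targets that follow (ggml-cpu-x64, ggml-cpu-sse42, ggml-cpu-sandybridge, ggml-cpu-alderlake, plus the *-feats helpers) compile the same ggml-cpu sources with different -m feature flags and GGML_* defines, which is why they reproduce the identical warning list as well.
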
cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp:14: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 8%] Built target ggml-cpu-icelake-feats [ 9%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o cd 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-threading.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | 
static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 9%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c [ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o CMakeFiles/ggml-base.dir/gguf.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:4067:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function] 4067 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid, | ^~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function] 579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min, | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct 
ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp:3: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool 
ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-base.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -Wl,-soname,libggml-base.so -o 
../../../../../lib/ollama/libggml-base.so "CMakeFiles/ggml-base.dir/ggml.c.o" "CMakeFiles/ggml-base.dir/ggml.cpp.o" "CMakeFiles/ggml-base.dir/ggml-alloc.c.o" "CMakeFiles/ggml-base.dir/ggml-backend.cpp.o" "CMakeFiles/ggml-base.dir/ggml-opt.cpp.o" "CMakeFiles/ggml-base.dir/ggml-threading.cpp.o" "CMakeFiles/ggml-base.dir/ggml-quants.c.o" "CMakeFiles/ggml-base.dir/gguf.cpp.o" -lm gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 11%] Built target ggml-base /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 12%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o [ 13%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c [ 15%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o [ 15%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t 
params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 
261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 19%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
[ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o
cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*,
const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 24%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 25%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 26%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 27%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
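The repeated -Wunused-function warnings above all point back at ggml-impl.h: the header defines plain `static` helper functions, so every translation unit that includes it without calling a given helper gets its own unused copy flagged. They are noise, not build failures. A minimal stand-alone sketch (hypothetical files, not part of the ollama sources) reproducing the pattern and the usual ways to silence it:

/* demo.h -- hypothetical header reproducing the pattern seen in ggml-impl.h:
 * a plain `static` function defined in a header warns in every .c/.cpp file
 * that includes the header but never calls it. */
#include <stddef.h>

static size_t demo_bitset_size(size_t n) {   /* -Wunused-function fires in unusing TUs */
    return (n + 7) / 8;
}

/* Typical ways to avoid the warning (standard GCC/Clang behaviour):
 *   - make it `static inline` (inline statics are exempt from -Wunused-function), or
 *   - mark the definition with __attribute__((unused)) to opt out explicitly. */

/* demo.c -- includes the header but uses nothing from it */
#include "demo.h"
int main(void) { return 0; }
/* gcc -Wall -c demo.c  ->  warning: 'demo_bitset_size' defined but not used */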
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
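The same ggml-cpu sources (quants.c, traits.cpp, amx/*.cpp) are being compiled several times into separate shared backends: ggml-cpu-x64 with no extra -m flags, ggml-cpu-sse42 with -msse4.2, ggml-cpu-sandybridge adding -mavx, and ggml-cpu-alderlake adding -mf16c -mfma -mbmi2 -mavx2 -mavxvnni. With GGML_BACKEND_DL and GGML_BACKEND_SHARED defined they become loadable variants, and the best match for the running CPU is picked at run time. A rough sketch of that kind of runtime selection follows; the .so names are assumptions taken from the CMake target names in the log, and the real ggml loader is more involved than this.

/* cpu_variant_sketch.c -- illustrative runtime dispatch over per-ISA backend
 * libraries; library names are assumed, not ollama's actual loader logic. */
#include <dlfcn.h>
#include <stdio.h>

static const char *pick_backend(void) {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2"))   return "libggml-cpu-alderlake.so";
    if (__builtin_cpu_supports("avx"))    return "libggml-cpu-sandybridge.so";
    if (__builtin_cpu_supports("sse4.2")) return "libggml-cpu-sse42.so";
    return "libggml-cpu-x64.so";   /* baseline x86-64, no extra ISA flags */
}

int main(void) {
    const char *name = pick_backend();
    void *handle = dlopen(name, RTLD_NOW);
    printf("would load %s (%s)\n", name, handle ? "found" : dlerror());
    if (handle) dlclose(handle);
    return 0;
}
/* build: gcc -O2 cpu_variant_sketch.c -o cpu_variant_sketch -ldl */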
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 32%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 35%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 36%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 36%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t 
i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 39%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
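The same source files are being compiled several times here, once per CPU-variant target: ggml-cpu-x64 with no extra ISA flags, ggml-cpu-sse42 with -msse4.2, ggml-cpu-sandybridge adding -mavx, and ggml-cpu-alderlake adding -mavx2 -mavxvnni -mf16c -mfma -mbmi2. With -DGGML_BACKEND_DL, -DGGML_BACKEND_SHARED and -fPIC in every command, each variant is built as its own loadable shared object, so the variant matching the host CPU can be picked at run time. A rough sketch of that kind of dispatch follows — illustrative only; the library names and the simplified selection order are hypothetical, not ggml's actual loader:

    /* pick_backend.c -- illustrative sketch, not ggml code.
     * Chooses one of several hypothetical per-ISA shared objects based on
     * what the running CPU supports, then loads it with dlopen().
     * Build with: gcc -Wall pick_backend.c -o pick_backend -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    static const char *pick_backend(void) {
        __builtin_cpu_init();   /* initialize GCC's CPU feature detection */
        if (__builtin_cpu_supports("avx2"))   return "./libbackend-alderlake.so";
        if (__builtin_cpu_supports("avx"))    return "./libbackend-sandybridge.so";
        if (__builtin_cpu_supports("sse4.2")) return "./libbackend-sse42.so";
        return "./libbackend-x64.so";         /* plain x86-64 baseline */
    }

    int main(void) {
        const char *name = pick_backend();
        void *handle = dlopen(name, RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "dlopen(%s) failed: %s\n", name, dlerror());
            return 1;
        }
        printf("loaded %s\n", name);
        /* a real loader would dlsym() the backend's entry points here */
        dlclose(handle);
        return 0;
    }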
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 39%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not 
used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void 
ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | 
static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 41%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 42%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used 
[-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void 
ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | 
static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 44%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o [ 44%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 46%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not 
used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * 
tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o [ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
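The compile commands above build the same ggml-cpu sources several times over — the ggml-cpu-x64, ggml-cpu-sse42, ggml-cpu-sandybridge and ggml-cpu-alderlake targets — each with a progressively richer set of -m... ISA flags and with GGML_BACKEND_DL/GGML_BACKEND_SHARED defined, i.e. as separately loadable CPU-backend variants. As a rough, hypothetical sketch only (the function names below are invented and this is not ollama/ggml's actual loader code), per-ISA builds like these are normally paired with a run-time feature check that picks the best variant on the host CPU, for example via GCC's __builtin_cpu_supports():

```c
/* dispatch_demo.c — illustrative sketch only; names are hypothetical and this
 * is not ggml's actual backend selection code.  It shows the general pattern
 * that per-ISA builds such as x64 / sse42 / sandybridge / alderlake enable:
 * detect CPU features at run time and choose the richest matching variant.
 *
 * Build:  gcc -Wall -o dispatch_demo dispatch_demo.c
 */
#include <stdio.h>

static const char *pick_cpu_variant(void) {
    __builtin_cpu_init();                      /* initialize GCC's CPU model once */
    if (__builtin_cpu_supports("avx2") &&
        __builtin_cpu_supports("fma"))
        return "alderlake";                    /* closest to the AVX2/FMA build above */
    if (__builtin_cpu_supports("avx"))
        return "sandybridge";                  /* AVX-only build */
    if (__builtin_cpu_supports("sse4.2"))
        return "sse42";                        /* SSE4.2 baseline build */
    return "x64";                              /* plain x86-64 fallback, no -m flags */
}

int main(void) {
    printf("would load ggml-cpu variant: %s\n", pick_cpu_variant());
    return 0;
}
```

Under that reading, the plain ggml-cpu-x64 build (compiled above with no -msse4.2/-mavx flags at all) would be the variant left for CPUs that report none of the newer features.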
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used 
[-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
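The long runs of -Wunused-function warnings repeated throughout this build all have the same shape: ggml-impl.h and ggml-cpu/common.h define small static helper functions directly in the header, so every .c/.cpp file that includes them gets its own internal-linkage copy, and GCC under -Wall flags each copy that the particular translation unit never calls. They are noise, not errors. A minimal self-contained reproduction (hypothetical file and function names, not ggml source), together with the usual ways this class of warning is silenced:

```c
/* unused_demo.c — minimal sketch with hypothetical names; NOT ggml source.
 * A non-inline static function that is defined but never called in a
 * translation unit triggers -Wunused-function under -Wall, which is all the
 * ggml-impl.h warnings above amount to.
 *
 * Build:  gcc -Wall -c unused_demo.c
 */

static int helper_uncalled(int x) {        /* gcc -Wall: "defined but not used" */
    return x + 1;
}

static inline int helper_inline(int x) {   /* unused 'static inline' is not warned about */
    return x + 2;
}

__attribute__((unused))
static int helper_marked(int x) {          /* the attribute silences the warning explicitly */
    return x + 3;
}

int main(void) {
    return 0;                              /* none of the helpers is called */
}
```

Compiled with gcc -Wall -c unused_demo.c, only helper_uncalled is reported — the same pattern seen above for ggml_hash_insert, ggml_bitset_size and the other ggml-impl.h helpers in each translation unit that does not use them.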
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used 
[-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 50%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 51%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 52%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 53%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined 
but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 54%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void 
ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 55%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | 
static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 56%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-x64.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-x64.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-x64.so "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used 
[-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 57%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sse42.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-sse42.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,-z,relro -Wl,--as-needed 
-Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-sse42.so "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so [ 58%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used 
[-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 59%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 60%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 61%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp [ 62%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sandybridge.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-sandybridge.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-sandybridge.so "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not 
used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 62%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-alderlake.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-alderlake.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 
-Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-alderlake.so "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 62%] Built target ggml-cpu-x64 gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 62%] Built target ggml-cpu-sse42 /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 63%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc 
-DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 64%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 
| static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 64%] Built target ggml-cpu-sandybridge [ 64%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 65%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 66%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 
| static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 67%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 69%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 70%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 71%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 71%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static 
int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 73%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 73%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 74%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 75%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 77%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 78%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 78%] Built target ggml-cpu-alderlake [ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
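The surrounding compile commands show the same ggml-cpu sources being rebuilt once per CPU variant: the ggml-cpu-haswell objects stop at -mavx2, ggml-cpu-skylakex adds the -mavx512f/-mavx512bw family, ggml-cpu-icelake additionally enables -mavx512vbmi/-mavx512vnni, and the ggml-cpu-alderlake target has already finished. Together with -DGGML_BACKEND_DL on every invocation, this points at one shared backend library per variant, selected at run time rather than at link time. A minimal sketch of that selection pattern, assuming hypothetical library paths and not the actual ggml loader:

/* pick_backend.c -- illustrative runtime dispatch over per-ISA backend builds */
#include <dlfcn.h>
#include <stdio.h>

static const char *pick_cpu_backend(void) {
    /* Probe the running CPU with GCC's __builtin_cpu_supports and choose the
     * most capable variant it can execute; the .so names are hypothetical. */
    if (__builtin_cpu_supports("avx512vnni")) return "./libggml-cpu-icelake.so";
    if (__builtin_cpu_supports("avx512f"))    return "./libggml-cpu-skylakex.so";
    if (__builtin_cpu_supports("avx2"))       return "./libggml-cpu-haswell.so";
    return "./libggml-cpu.so";  /* generic x86-64 fallback */
}

int main(void) {
    const char *path = pick_cpu_backend();
    void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "failed to load %s: %s\n", path, dlerror());
        return 1;
    }
    printf("loaded %s\n", path);
    dlclose(handle);
    return 0;
}

Built with `gcc -Wall pick_backend.c -ldl`, the sketch loads whichever variant the host supports; compiling every variant in this chroot serves the same purpose, letting the resulting package run on plain x86-64 machines while still taking AVX-512 paths where the hardware allows.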
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct 
ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 80%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 81%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct 
ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 83%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 84%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 85%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 86%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 86%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float 
ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 87%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 88%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float 
ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 89%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
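The same ggml-cpu sources (vec.cpp, ops.cpp, llamafile/sgemm.cpp, arch/x86/quants.c, arch/x86/repack.cpp) are compiled once per CPU variant: the ggml-cpu-haswell objects stop at -mavx2/-mfma/-mf16c, ggml-cpu-skylakex adds -mavx512f/-mavx512cd/-mavx512vl/-mavx512dq/-mavx512bw, and ggml-cpu-icelake adds -mavx512vbmi/-mavx512vnni on top. Because everything is built with -DGGML_BACKEND_DL and -DGGML_BACKEND_SHARED, each variant ends up as its own loadable module, so the most capable one the host CPU supports can be chosen at run time. A rough sketch of that kind of selection using GCC's __builtin_cpu_supports (illustrative only, not ollama's actual dispatch code):

    /* cpu_pick.c -- hypothetical variant picker; build with: gcc -O2 cpu_pick.c */
    #include <stdio.h>

    /* Picks the most capable variant this CPU can run, mirroring the
     * haswell / skylakex / icelake split visible in the compile flags above. */
    static const char *pick_variant(void) {
        __builtin_cpu_init();   /* initialise GCC's CPU feature detection */
        if (__builtin_cpu_supports("avx512vbmi") && __builtin_cpu_supports("avx512vnni"))
            return "icelake";
        if (__builtin_cpu_supports("avx512f") && __builtin_cpu_supports("avx512bw"))
            return "skylakex";
        if (__builtin_cpu_supports("avx2") && __builtin_cpu_supports("fma"))
            return "haswell";
        return "generic";
    }

    int main(void) {
        printf("would load libggml-cpu-%s.so\n", pick_variant());
        return 0;
    }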
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | 
static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const 
struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | 
static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 92%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not 
used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool 
ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 94%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: 
warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 95%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c [ 95%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-haswell.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-haswell.so "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * 
hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 96%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
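Two details worth noting at this point in the log. First, the quants.c objects are compiled with /usr/bin/gcc rather than g++, which is why their -Wunused-function diagnostics print bare names (‘ggml_hash_insert’) instead of the full C++ signatures shown for the .cpp files. Second, the "Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so" step above is where -DGGML_BACKEND_DL/-DGGML_BACKEND_SHARED pay off: each CPU variant becomes a plain shared object under lib/ollama/ that can be loaded on demand. As a hedged illustration of loading such a module — the entry-point symbol below is a made-up placeholder, not whatever ggml actually exports — the standard dlopen/dlsym pattern looks like this:

    /* load_backend.c -- illustrative only; on older glibc link with -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Path as produced by this build; adjust to the installed location. */
        void *h = dlopen("lib/ollama/libggml-cpu-haswell.so", RTLD_NOW | RTLD_LOCAL);
        if (!h) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* "backend_init" is a hypothetical symbol name used purely for illustration. */
        void *(*backend_init)(void) = (void *(*)(void)) dlsym(h, "backend_init");
        if (backend_init) {
            backend_init();
        } else {
            fprintf(stderr, "symbol not found: %s\n", dlerror());
        }

        dlclose(h);
        return 0;
    }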
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 97%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 98%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: 
warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-skylakex.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-skylakex.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-skylakex.so "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-icelake.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-icelake.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 
-Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-icelake.so "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-haswell gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-skylakex gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-icelake /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Nothing to be done for 'ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build'. 
gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu gmake[2]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/CMakeFiles 0 gmake[1]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' + RPM_EC=0 ++ jobs -p + exit 0 Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.vwOWzt + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + '[' /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT '!=' / ']' + rm -rf /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT ++ dirname /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + mkdir -p /builddir/build/BUILD/ollama-0.12.3-build + mkdir /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml install --destdir /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT --install-directory /usr/share/licenses/ollama --filelist licenses.list Using detector: askalono + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin' + install -m 0755 -vp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ollama' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig' + install -m 0644 -vp /builddir/build/SOURCES/sysconfig-ollama /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig/ollama '/builddir/build/SOURCES/sysconfig-ollama' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig/ollama' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system' + install -m 0644 -vp /builddir/build/SOURCES/ollama.service /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system/ollama.service '/builddir/build/SOURCES/ollama.service' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system/ollama.service' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d' + install -m 0644 -vp /builddir/build/SOURCES/ollama-user.conf /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d/ollama.conf '/builddir/build/SOURCES/ollama-user.conf' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d/ollama.conf' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var' install: 
creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib/ollama install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib/ollama' + DESTDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + /usr/bin/cmake --install redhat-linux-build_ggml-cpu --component CPU -- Install configuration: "Release" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-base.so -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so" to "" + /usr/bin/find-debuginfo -j4 --strict-build-id -m -i --build-id-seed 0.12.3-1.fc41 --unique-debug-suffix -0.12.3-1.fc41.x86_64 --unique-debug-src-base ollama-0.12.3-1.fc41.x86_64 --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 -S debugsourcefiles.list /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 find-debuginfo: starting Extracting debug info from 9 files warning: Unsupported auto-load script at offset 0 in section .debug_gdb_scripts of file /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ollama. Use `info auto-load python-scripts [REGEXP]' to list them. DWARF-compressing 9 files sepdebugcrcfix: Updated 9 CRC32s, 0 CRC32s did match. 
Creating .debug symlinks for symlinks to ELF files Copying sources found by 'debugedit -l' to /usr/src/debug/ollama-0.12.3-1.fc41.x86_64 find-debuginfo: done + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-ldconfig + /usr/lib/rpm/brp-compress + /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip + /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip + /usr/lib/rpm/check-rpaths + /usr/lib/rpm/redhat/brp-mangle-shebangs + /usr/lib/rpm/brp-remove-la-files + env /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 -j4 + /usr/lib/rpm/redhat/brp-python-hardlink + /usr/bin/add-determinism --brp -j4 /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT Scanned 497 directories and 1626 files, processed 0 inodes, 0 modified (0 replaced + 0 rewritten), 0 unsupported format, 0 errors Reading /builddir/build/BUILD/ollama-0.12.3-build/SPECPARTS/rpm-debuginfo.specpart Executing(%check): /bin/sh -e /var/tmp/rpm-tmp.Cqztj0 + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml report all --verify 'Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSL-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC0-1.0 AND ISC AND LicenseRef-Fedora-Public-Domain AND LicenseRef-scancode-protobuf AND MIT AND NCSA AND NTP AND OpenSSL AND ZPL-2.1 AND Zlib' Using detector: askalono LICENSE: MIT convert/sentencepiece/LICENSE: Apache-2.0 llama/llama.cpp/LICENSE: MIT ml/backend/ggml/ggml/LICENSE: MIT vendor/github.com/agnivade/levenshtein/License.txt: MIT vendor/github.com/apache/arrow/go/arrow/LICENSE.txt: (Apache-2.0 AND BSD-3-Clause) AND BSD-3-Clause AND CC0-1.0 AND (LicenseRef-scancode-public-domain AND MIT) AND Apache-2.0 AND BSL-1.0 AND (BSD-2-Clause AND BSD-3-Clause) AND MIT AND (BSL-1.0 AND BSD-2-Clause) AND BSD-2-Clause AND ZPL-2.1 AND LicenseRef-scancode-protobuf AND NCSA AND (CC-BY-3.0 AND MIT) AND (CC-BY-4.0 AND LicenseRef-scancode-public-domain) AND NTP AND Zlib AND OpenSSL AND (BSD-3-Clause AND BSD-2-Clause) AND (BSD-2-Clause AND Zlib) vendor/github.com/bytedance/sonic/LICENSE: Apache-2.0 vendor/github.com/bytedance/sonic/loader/LICENSE: Apache-2.0 vendor/github.com/chewxy/hm/LICENCE: MIT vendor/github.com/chewxy/math32/LICENSE: BSD-2-Clause vendor/github.com/cloudwego/base64x/LICENSE: Apache-2.0 vendor/github.com/cloudwego/base64x/LICENSE-APACHE: Apache-2.0 vendor/github.com/cloudwego/iasm/LICENSE-APACHE: Apache-2.0 vendor/github.com/containerd/console/LICENSE: Apache-2.0 vendor/github.com/d4l3k/go-bfloat16/LICENSE: MIT vendor/github.com/davecgh/go-spew/LICENSE: ISC vendor/github.com/dlclark/regexp2/LICENSE: MIT vendor/github.com/emirpasic/gods/v2/LICENSE: BSD-2-Clause AND ISC vendor/github.com/gabriel-vasile/mimetype/LICENSE: MIT vendor/github.com/gin-contrib/cors/LICENSE: MIT vendor/github.com/gin-contrib/sse/LICENSE: MIT vendor/github.com/gin-gonic/gin/LICENSE: MIT vendor/github.com/go-playground/locales/LICENSE: MIT vendor/github.com/go-playground/universal-translator/LICENSE: MIT vendor/github.com/go-playground/validator/v10/LICENSE: MIT vendor/github.com/goccy/go-json/LICENSE: MIT vendor/github.com/gogo/protobuf/LICENSE: BSD-3-Clause vendor/github.com/golang/protobuf/LICENSE: BSD-3-Clause vendor/github.com/google/flatbuffers/LICENSE: Apache-2.0 vendor/github.com/google/go-cmp/LICENSE: BSD-3-Clause vendor/github.com/google/uuid/LICENSE: BSD-3-Clause vendor/github.com/inconshreveable/mousetrap/LICENSE: Apache-2.0 vendor/github.com/json-iterator/go/LICENSE: MIT vendor/github.com/klauspost/cpuid/v2/LICENSE: MIT 
vendor/github.com/leodido/go-urn/LICENSE: MIT vendor/github.com/mattn/go-isatty/LICENSE: MIT vendor/github.com/mattn/go-runewidth/LICENSE: MIT vendor/github.com/modern-go/concurrent/LICENSE: Apache-2.0 vendor/github.com/modern-go/reflect2/LICENSE: Apache-2.0 vendor/github.com/nlpodyssey/gopickle/LICENSE: BSD-2-Clause vendor/github.com/olekukonko/tablewriter/LICENSE.md: MIT vendor/github.com/pdevine/tensor/LICENCE: Apache-2.0 vendor/github.com/pelletier/go-toml/v2/LICENSE: MIT vendor/github.com/pkg/errors/LICENSE: BSD-2-Clause vendor/github.com/pmezard/go-difflib/LICENSE: BSD-3-Clause vendor/github.com/rivo/uniseg/LICENSE.txt: MIT vendor/github.com/spf13/cobra/LICENSE.txt: Apache-2.0 vendor/github.com/spf13/pflag/LICENSE: BSD-3-Clause vendor/github.com/stretchr/testify/LICENSE: MIT vendor/github.com/twitchyliquid64/golang-asm/LICENSE: BSD-3-Clause vendor/github.com/ugorji/go/codec/LICENSE: MIT vendor/github.com/x448/float16/LICENSE: MIT vendor/github.com/xtgo/set/LICENSE: BSD-2-Clause vendor/go4.org/unsafe/assume-no-moving-gc/LICENSE: BSD-3-Clause vendor/golang.org/x/arch/LICENSE: BSD-3-Clause vendor/golang.org/x/crypto/LICENSE: BSD-3-Clause vendor/golang.org/x/exp/LICENSE: BSD-3-Clause vendor/golang.org/x/image/LICENSE: BSD-3-Clause vendor/golang.org/x/net/LICENSE: BSD-3-Clause vendor/golang.org/x/sync/LICENSE: BSD-3-Clause vendor/golang.org/x/sys/LICENSE: BSD-3-Clause vendor/golang.org/x/term/LICENSE: BSD-3-Clause vendor/golang.org/x/text/LICENSE: BSD-3-Clause vendor/golang.org/x/tools/LICENSE: BSD-3-Clause vendor/golang.org/x/xerrors/LICENSE: BSD-3-Clause vendor/gonum.org/v1/gonum/LICENSE: BSD-3-Clause vendor/google.golang.org/protobuf/LICENSE: BSD-3-Clause vendor/gopkg.in/yaml.v3/LICENSE: MIT AND (MIT AND Apache-2.0) vendor/gorgonia.org/vecf32/LICENSE: MIT vendor/gorgonia.org/vecf64/LICENSE: MIT Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSL-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC0-1.0 AND ISC AND LicenseRef-Fedora-Public-Domain AND LicenseRef-scancode-protobuf AND MIT AND NCSA AND NTP AND OpenSSL AND ZPL-2.1 AND Zlib + GO_LDFLAGS=' -X github.com/ollama/ollama/version=0.12.3' + GO_TEST_FLAGS='-buildmode pie -compiler gc' + GO_TEST_EXT_LD_FLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + go-rpm-integration check -i github.com/ollama/ollama -b /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin -s /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build -V 0.12.3-1.fc41 -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT -g /usr/share/gocode -r '.*example.*' Testing in: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src PATH: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/sbin GOPATH: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode GO111MODULE: off command: go test -buildmode pie -compiler gc -ldflags " -X github.com/ollama/ollama/version=0.12.3 -extldflags '-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '" testing: github.com/ollama/ollama github.com/ollama/ollama/api 2025/10/04 04:42:11 http: superfluous response.WriteHeader call from github.com/ollama/ollama/api.TestClientStream.func1.1 
(client_test.go:128) PASS ok github.com/ollama/ollama/api 0.010s github.com/ollama/ollama/api 2025/10/04 04:42:11 http: superfluous response.WriteHeader call from github.com/ollama/ollama/api.TestClientStream.func1.1 (client_test.go:128) PASS ok github.com/ollama/ollama/api 0.010s github.com/ollama/ollama/app/assets ? github.com/ollama/ollama/app/assets [no test files] github.com/ollama/ollama/app/lifecycle PASS ok github.com/ollama/ollama/app/lifecycle 0.003s github.com/ollama/ollama/app/lifecycle PASS ok github.com/ollama/ollama/app/lifecycle 0.003s github.com/ollama/ollama/app/store ? github.com/ollama/ollama/app/store [no test files] github.com/ollama/ollama/app/tray ? github.com/ollama/ollama/app/tray [no test files] github.com/ollama/ollama/app/tray/commontray ? github.com/ollama/ollama/app/tray/commontray [no test files] github.com/ollama/ollama/auth ? github.com/ollama/ollama/auth [no test files] github.com/ollama/ollama/cmd deleted 'test-model' Couldn't find '/tmp/TestPushHandlersuccessful_push958399944/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDmPssbYIeAmX4BSL1twqFCU7u/APTftQG8qNmVnGqAr Couldn't find '/tmp/TestPushHandlernot_signed_in_push2299012968/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDIbj/OS9at60qjcJcvcVwLueK21rwnnyTq7dozt579X Couldn't find '/tmp/TestPushHandlerunauthorized_push548191192/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILPec8ZiFR6ZIUeruD8l2fMhjRMVOKLY16r89YkSLNZU Added image '/tmp/TestExtractFileDataRemovesQuotedFilepath2377739235/001/img.jpg' PASS ok github.com/ollama/ollama/cmd 0.022s github.com/ollama/ollama/cmd deleted 'test-model' Couldn't find '/tmp/TestPushHandlersuccessful_push4193145132/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOCutZHY0jcE2yx3vkGcl/rXBb12slrgL4jYw/FdERcR Couldn't find '/tmp/TestPushHandlernot_signed_in_push2033702257/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIbiaT99uea4TkrayiY143HsyLBFhy4CYrfCrjOGsd0W Couldn't find '/tmp/TestPushHandlerunauthorized_push2734522862/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMhgUxJfvq7e9EWdQdQvfyAFKIjtkQJyCVL3a8ENpSMY Added image '/tmp/TestExtractFileDataRemovesQuotedFilepath3848326413/001/img.jpg' PASS ok github.com/ollama/ollama/cmd 0.023s github.com/ollama/ollama/convert PASS ok github.com/ollama/ollama/convert 0.013s github.com/ollama/ollama/convert PASS ok github.com/ollama/ollama/convert 0.014s github.com/ollama/ollama/convert/sentencepiece ?
github.com/ollama/ollama/convert/sentencepiece [no test files] github.com/ollama/ollama/discover 2025/10/04 04:44:21 INFO example scenario="#5554 Docker Ollama container inside the LXC" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 04:44:21 INFO example scenario="#5554 LXC direct output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 04:44:21 INFO example scenario="#5554 LXC docker container output" cpus="[{ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:29 EfficiencyCoreCount:0 ThreadCount:29}]" 2025/10/04 04:44:21 INFO example scenario="#5554 LXC docker output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 04:44:21 INFO example scenario="#7359 VMware multi-core core VM" cpus="[{ID:0 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:10 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:12 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:14 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:2 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:4 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:6 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:8 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1}]" 2025/10/04 04:44:21 INFO example scenario="#7287 HyperV 2 socket exposed to VM" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 04:44:21 INFO looking for compatible GPUs 2025/10/04 04:44:21 INFO no compatible GPUs were discovered PASS ok github.com/ollama/ollama/discover 0.007s github.com/ollama/ollama/discover 2025/10/04 04:44:22 INFO example scenario="#5554 LXC docker output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 04:44:22 INFO example scenario="#7359 VMware multi-core core VM" cpus="[{ID:0 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:10 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:12 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:14 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:2 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz 
CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:4 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:6 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:8 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1}]" 2025/10/04 04:44:22 INFO example scenario="#7287 HyperV 2 socket exposed to VM" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 04:44:22 INFO example scenario="#5554 Docker Ollama container inside the LXC" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 04:44:22 INFO example scenario="#5554 LXC direct output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 04:44:22 INFO example scenario="#5554 LXC docker container output" cpus="[{ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:29 EfficiencyCoreCount:0 ThreadCount:29}]" 2025/10/04 04:44:22 INFO looking for compatible GPUs 2025/10/04 04:44:22 INFO no compatible GPUs were discovered PASS ok github.com/ollama/ollama/discover 0.006s github.com/ollama/ollama/envconfig 2025/10/04 04:44:22 WARN invalid port, using default port=66000 default=11434 2025/10/04 04:44:22 WARN invalid port, using default port=-1 default=11434 2025/10/04 04:44:22 WARN invalid environment variable, using default key=OLLAMA_UINT value=-1 default=11434 2025/10/04 04:44:22 WARN invalid environment variable, using default key=OLLAMA_UINT value=0o10 default=11434 2025/10/04 04:44:22 WARN invalid environment variable, using default key=OLLAMA_UINT value=0x10 default=11434 2025/10/04 04:44:22 WARN invalid environment variable, using default key=OLLAMA_UINT value=string default=11434 PASS ok github.com/ollama/ollama/envconfig 0.005s github.com/ollama/ollama/envconfig 2025/10/04 04:44:22 WARN invalid port, using default port=66000 default=11434 2025/10/04 04:44:22 WARN invalid port, using default port=-1 default=11434 2025/10/04 04:44:22 WARN invalid environment variable, using default key=OLLAMA_UINT value=-1 default=11434 2025/10/04 04:44:22 WARN invalid environment variable, using default key=OLLAMA_UINT value=0o10 default=11434 2025/10/04 04:44:22 WARN invalid environment variable, using default key=OLLAMA_UINT value=0x10 default=11434 2025/10/04 04:44:22 WARN invalid environment variable, using default key=OLLAMA_UINT value=string default=11434 PASS ok github.com/ollama/ollama/envconfig 0.004s github.com/ollama/ollama/format PASS ok github.com/ollama/ollama/format 0.002s github.com/ollama/ollama/format PASS ok github.com/ollama/ollama/format 0.002s github.com/ollama/ollama/fs ? 
github.com/ollama/ollama/fs [no test files] github.com/ollama/ollama/fs/ggml PASS ok github.com/ollama/ollama/fs/ggml 0.004s github.com/ollama/ollama/fs/ggml PASS ok github.com/ollama/ollama/fs/ggml 0.004s github.com/ollama/ollama/fs/gguf PASS ok github.com/ollama/ollama/fs/gguf 0.004s github.com/ollama/ollama/fs/gguf PASS ok github.com/ollama/ollama/fs/gguf 0.004s github.com/ollama/ollama/fs/util/bufioutil PASS ok github.com/ollama/ollama/fs/util/bufioutil 0.002s github.com/ollama/ollama/fs/util/bufioutil PASS ok github.com/ollama/ollama/fs/util/bufioutil 0.002s github.com/ollama/ollama/harmony event: {} event: {Header:{Role:user Channel: Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_weather}} event: {Content:{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|message|>{"sunny": true, "temperature": 20}} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:User asks weather in SF. We need location. Use get_current_weather with location "San Francisco, CA".} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_current_weather}} event: {Content:{"location":"San Francisco, CA"}<|call|>} PASS ok github.com/ollama/ollama/harmony 0.003s github.com/ollama/ollama/harmony event: {} event: {Header:{Role:user Channel: Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_weather}} event: {Content:{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|message|>{"sunny": true, "temperature": 20}} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:User asks weather in SF. We need location. Use get_current_weather with location "San Francisco, CA".} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_current_weather}} event: {Content:{"location":"San Francisco, CA"}<|call|>} PASS ok github.com/ollama/ollama/harmony 0.003s github.com/ollama/ollama/kvcache PASS ok github.com/ollama/ollama/kvcache 0.002s github.com/ollama/ollama/kvcache PASS ok github.com/ollama/ollama/kvcache 0.002s github.com/ollama/ollama/llama PASS ok github.com/ollama/ollama/llama 0.004s github.com/ollama/ollama/llama PASS ok github.com/ollama/ollama/llama 0.004s github.com/ollama/ollama/llama/llama.cpp/common ? github.com/ollama/ollama/llama/llama.cpp/common [no test files] github.com/ollama/ollama/llama/llama.cpp/src ? github.com/ollama/ollama/llama/llama.cpp/src [no test files] github.com/ollama/ollama/llama/llama.cpp/tools/mtmd ? 
github.com/ollama/ollama/llama/llama.cpp/tools/mtmd [no test files] github.com/ollama/ollama/llm 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection PASS ok github.com/ollama/ollama/llm 0.006s github.com/ollama/ollama/llm 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection 2025/10/04 04:44:32 INFO aborting completion request due to client closing the connection PASS ok github.com/ollama/ollama/llm 0.006s github.com/ollama/ollama/logutil ? github.com/ollama/ollama/logutil [no test files] github.com/ollama/ollama/ml ? github.com/ollama/ollama/ml [no test files] github.com/ollama/ollama/ml/backend ? github.com/ollama/ollama/ml/backend [no test files] github.com/ollama/ollama/ml/backend/ggml ? github.com/ollama/ollama/ml/backend/ggml [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src ? github.com/ollama/ollama/ml/backend/ggml/ggml/src [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile [no test files] github.com/ollama/ollama/ml/nn ? github.com/ollama/ollama/ml/nn [no test files] github.com/ollama/ollama/ml/nn/fast ? github.com/ollama/ollama/ml/nn/fast [no test files] github.com/ollama/ollama/ml/nn/pooling 2025/10/04 04:44:34 INFO looking for compatible GPUs 2025/10/04 04:44:34 INFO no compatible GPUs were discovered 2025/10/04 04:44:34 INFO architecture=test file_type=unknown name="" description="" num_tensors=1 num_key_values=3 2025/10/04 04:44:34 INFO system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) PASS ok github.com/ollama/ollama/ml/nn/pooling 0.009s github.com/ollama/ollama/ml/nn/pooling 2025/10/04 04:44:35 INFO looking for compatible GPUs 2025/10/04 04:44:35 INFO no compatible GPUs were discovered 2025/10/04 04:44:35 INFO architecture=test file_type=unknown name="" description="" num_tensors=1 num_key_values=3 2025/10/04 04:44:35 INFO system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) PASS ok github.com/ollama/ollama/ml/nn/pooling 0.011s github.com/ollama/ollama/ml/nn/rope ? 
github.com/ollama/ollama/ml/nn/rope [no test files] github.com/ollama/ollama/model time=2025-10-04T04:44:36.225Z level=DEBUG msg="adding bos token to prompt" id=1 time=2025-10-04T04:44:36.225Z level=DEBUG msg="adding eos token to prompt" id=2 PASS ok github.com/ollama/ollama/model 0.225s github.com/ollama/ollama/model time=2025-10-04T04:44:36.927Z level=DEBUG msg="adding bos token to prompt" id=1 time=2025-10-04T04:44:36.927Z level=DEBUG msg="adding eos token to prompt" id=2 PASS ok github.com/ollama/ollama/model 0.244s github.com/ollama/ollama/model/imageproc PASS ok github.com/ollama/ollama/model/imageproc 0.021s github.com/ollama/ollama/model/imageproc PASS ok github.com/ollama/ollama/model/imageproc 0.021s github.com/ollama/ollama/model/input ? github.com/ollama/ollama/model/input [no test files] github.com/ollama/ollama/model/models ? github.com/ollama/ollama/model/models [no test files] github.com/ollama/ollama/model/models/bert ? github.com/ollama/ollama/model/models/bert [no test files] github.com/ollama/ollama/model/models/deepseek2 ? github.com/ollama/ollama/model/models/deepseek2 [no test files] github.com/ollama/ollama/model/models/gemma2 ? github.com/ollama/ollama/model/models/gemma2 [no test files] github.com/ollama/ollama/model/models/gemma3 ? github.com/ollama/ollama/model/models/gemma3 [no test files] github.com/ollama/ollama/model/models/gemma3n ? github.com/ollama/ollama/model/models/gemma3n [no test files] github.com/ollama/ollama/model/models/gptoss ? github.com/ollama/ollama/model/models/gptoss [no test files] github.com/ollama/ollama/model/models/llama ? github.com/ollama/ollama/model/models/llama [no test files] github.com/ollama/ollama/model/models/llama4 PASS ok github.com/ollama/ollama/model/models/llama4 0.012s github.com/ollama/ollama/model/models/llama4 PASS ok github.com/ollama/ollama/model/models/llama4 0.012s github.com/ollama/ollama/model/models/mistral3 ? github.com/ollama/ollama/model/models/mistral3 [no test files] github.com/ollama/ollama/model/models/mllama PASS ok github.com/ollama/ollama/model/models/mllama 0.455s github.com/ollama/ollama/model/models/mllama PASS ok github.com/ollama/ollama/model/models/mllama 0.473s github.com/ollama/ollama/model/models/qwen2 ? github.com/ollama/ollama/model/models/qwen2 [no test files] github.com/ollama/ollama/model/models/qwen25vl ? github.com/ollama/ollama/model/models/qwen25vl [no test files] github.com/ollama/ollama/model/models/qwen3 ? github.com/ollama/ollama/model/models/qwen3 [no test files] github.com/ollama/ollama/model/parsers PASS ok github.com/ollama/ollama/model/parsers 0.004s github.com/ollama/ollama/model/parsers PASS ok github.com/ollama/ollama/model/parsers 0.004s github.com/ollama/ollama/model/renderers PASS ok github.com/ollama/ollama/model/renderers 0.004s github.com/ollama/ollama/model/renderers PASS ok github.com/ollama/ollama/model/renderers 0.003s github.com/ollama/ollama/openai PASS ok github.com/ollama/ollama/openai 0.010s github.com/ollama/ollama/openai PASS ok github.com/ollama/ollama/openai 0.010s github.com/ollama/ollama/parser PASS ok github.com/ollama/ollama/parser 0.006s github.com/ollama/ollama/parser PASS ok github.com/ollama/ollama/parser 0.006s github.com/ollama/ollama/progress ? github.com/ollama/ollama/progress [no test files] github.com/ollama/ollama/readline ? github.com/ollama/ollama/readline [no test files] github.com/ollama/ollama/runner ? 
github.com/ollama/ollama/runner [no test files] github.com/ollama/ollama/runner/common PASS ok github.com/ollama/ollama/runner/common 0.002s github.com/ollama/ollama/runner/common PASS ok github.com/ollama/ollama/runner/common 0.002s github.com/ollama/ollama/runner/llamarunner PASS ok github.com/ollama/ollama/runner/llamarunner 0.004s github.com/ollama/ollama/runner/llamarunner PASS ok github.com/ollama/ollama/runner/llamarunner 0.004s github.com/ollama/ollama/runner/ollamarunner PASS ok github.com/ollama/ollama/runner/ollamarunner 0.004s github.com/ollama/ollama/runner/ollamarunner PASS ok github.com/ollama/ollama/runner/ollamarunner 0.005s github.com/ollama/ollama/sample PASS ok github.com/ollama/ollama/sample 0.169s github.com/ollama/ollama/sample PASS ok github.com/ollama/ollama/sample 0.164s github.com/ollama/ollama/server time=2025-10-04T04:44:49.985Z level=INFO source=logging.go:32 msg="ollama app started" time=2025-10-04T04:44:49.987Z level=DEBUG source=convert.go:232 msg="vocabulary is smaller than expected, padding with dummy tokens" expect=32000 actual=1 time=2025-10-04T04:44:49.992Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=general.file_type type=uint32 time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=general.quantization_version type=uint32 time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=llama.vocab_size type=uint32 time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.pre type=string time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:49.992Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.021Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.021Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.022Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.022Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.022Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.022Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.022Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T04:44:50.022Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.022Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T04:44:50.022Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.023Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.023Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string 
time=2025-10-04T04:44:50.023Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.023Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.023Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T04:44:50.023Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.023Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T04:44:50.023Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.023Z level=ERROR source=images.go:157 msg="unknown capability" capability=unknown time=2025-10-04T04:44:50.024Z level=WARN source=manifest.go:160 msg="bad manifest name" path=host/namespace/model/.hidden time=2025-10-04T04:44:50.026Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T04:44:50.026Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T04:44:50.026Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T04:44:50.026Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=4 time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:50.026Z level=WARN source=quantization.go:145 msg="tensor cols 100 are not divisible by 32, required for Q8_0 - using fallback quantization F16" time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.026Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:50.026Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.026Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.026Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[512 2]" offset=0 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:50.027Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=output.weight requested=Q4_K quantization=Q6_K time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=12 shape="[512 2]" offset=0 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:627 msg=output.weight kind=14 shape="[256 4]" offset=576 time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape="[512 2]" offset=0 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=4096 time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:50.027Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=blk.0.attn_v.weight requested=Q4_K quantization=Q6_K time=2025-10-04T04:44:50.027Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=14 shape="[512 2]" offset=0 time=2025-10-04T04:44:50.027Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=864 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.028Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.028Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.028Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[32 16 2]" offset=0 time=2025-10-04T04:44:50.028Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.028Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.028Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T04:44:50.028Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T04:44:50.028Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=8 shape="[32 16 2]" offset=0 time=2025-10-04T04:44:50.028Z level=DEBUG source=gguf.go:627 msg=output.weight kind=8 shape="[256 4]" offset=1088 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.028Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.architecture default=unknown time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.029Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.030Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.031Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.032Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.033Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.033Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.034Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:50.034Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.034Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.034Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.034Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.034Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:50.035Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.035Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.035Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.035Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.036Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.036Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.036Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.036Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.036Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.036Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.036Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T04:44:50.037Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.037Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment 
default=32 time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.block_count default=0 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.040Z level=DEBUG source=create.go:98 msg="create model from model name" from=bob resp = api.ShowResponse{License:"", Modelfile:"# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM test:latest\n\nFROM \nTEMPLATE {{ .Prompt }}\n", Parameters:"", Template:"{{ .Prompt }}", System:"", Renderer:"", Parser:"", Details:api.ModelDetails{ParentModel:"", Format:"", Family:"gptoss", Families:[]string{"gptoss"}, ParameterSize:"20.9B", QuantizationLevel:"MXFP4"}, Messages:[]api.Message(nil), RemoteModel:"bob", RemoteHost:"https://ollama.com:11434", ModelInfo:map[string]interface {}{"general.architecture":"gptoss", "gptoss.context_length":131072, "gptoss.embedding_length":2880}, ProjectorInfo:map[string]interface {}(nil), Tensors:[]api.Tensor(nil), Capabilities:[]model.Capability{"completion", "tools", "thinking"}, ModifiedAt:time.Date(2025, time.October, 4, 4, 44, 50, 40089115, time.UTC)} time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.041Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T04:44:50.041Z level=DEBUG source=gguf.go:578 msg=tokenizer.chat_template type=string time=2025-10-04T04:44:50.041Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.041Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.041Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.041Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.041Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.041Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.041Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.049Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.049Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.049Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.050Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.051Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.051Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.051Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.051Z 
level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.051Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.051Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.052Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.052Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.052Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.052Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.052Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.053Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-10-04T04:44:50.053Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA" time=2025-10-04T04:44:50.053Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so* time=2025-10-04T04:44:50.053Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build1543469610/b001/libcuda.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]" time=2025-10-04T04:44:50.054Z level=DEBUG 
source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T04:44:50.054Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so* time=2025-10-04T04:44:50.054Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build1543469610/b001/libcudart.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcudart.so* /tmp/go-build1543469610/b001/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]" time=2025-10-04T04:44:50.054Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T04:44:50.054Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu" time=2025-10-04T04:44:50.054Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered" time=2025-10-04T04:44:50.054Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T04:44:50.054Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.055Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.057Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.057Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.057Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.058Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.058Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.060Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.060Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.060Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.062Z level=DEBUG source=gpu.go:410 msg="updating system 
memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.062Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.062Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.064Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.064Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.065Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.065Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.067Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.067Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.069Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.069Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.069Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.070Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.071Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.072Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" 
now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.072Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.074Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.074Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2783850468/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.076Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.076Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.076Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.076Z level=DEBUG source=sched.go:121 msg="starting llm 
scheduler" time=2025-10-04T04:44:50.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.077Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.077Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.077Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.079Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.079Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.079Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.081Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.081Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.081Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.083Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.083Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.083Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.085Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.085Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.086Z level=DEBUG 
source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.086Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.086Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.088Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.088Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.088Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.090Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.090Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.090Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.091Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.091Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.091Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.094Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.094Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.094Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly4164419666/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.095Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.095Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.095Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.097Z level=DEBUG source=manifest.go:53 msg="layer does not exist" digest=sha256:776957f9c9239232f060e29d642d8f5ef3bb931f485c27a13ae6385515fb425c time=2025-10-04T04:44:50.097Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
time=2025-10-04T04:44:50.097Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.097Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.098Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.098Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.098Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.098Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.098Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.098Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.099Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.099Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T04:44:50.099Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.099Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.099Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.099Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T04:44:50.099Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T04:44:50.099Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.099Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.099Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.101Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.101Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.101Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat762958674/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.103Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.103Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.103Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat762958674/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.104Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:50.104Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.104Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.105Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.105Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.105Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat762958674/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.107Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.107Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.107Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat762958674/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.109Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.109Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.109Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat762958674/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.112Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.112Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.112Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat762958674/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.113Z level=DEBUG source=gpu.go:410 msg="updating system memory data" 
before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.113Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.113Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat762958674/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.145Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.145Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.145Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.145Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.145Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.145Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.145Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.145Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.145Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.145Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.145Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.146Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.146Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.146Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.146Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.146Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.146Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" 
time=2025-10-04T04:44:50.146Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.147Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.147Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T04:44:50.147Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.147Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.147Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.148Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T04:44:50.148Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T04:44:50.148Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.148Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.148Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.149Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.149Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.149Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1936139278/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.150Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.150Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.150Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1936139278/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.152Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:50.152Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.152Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.153Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.153Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.153Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1936139278/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.155Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 
GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.155Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.155Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1936139278/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.157Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.157Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.157Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1936139278/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.158Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:50.158Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.158Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.159Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.159Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.159Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1936139278/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.161Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.161Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.161Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1936139278/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.163Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.163Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.163Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1936139278/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.164Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.164Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.164Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.164Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.164Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.164Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.164Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.165Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.165Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.165Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.165Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.165Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.165Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.165Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.166Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.166Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.166Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag3908728408/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.167Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.167Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment 
default=32 time=2025-10-04T04:44:50.167Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag3908728408/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.170Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.170Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.170Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag3908728408/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.171Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.171Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.171Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag3908728408/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.173Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.173Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.173Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag3908728408/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T04:44:50.215Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.215Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.215Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.215Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 
time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.215Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.216Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.216Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.216Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.216Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.216Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.216Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.216Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.217Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.217Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.217Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimecontent_streams_as_it_arrives3093152144/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.308Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.308Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.308Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.308Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.308Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.308Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.308Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.308Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 
time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.309Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.309Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.309Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.309Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.309Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.309Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.309Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.309Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.310Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.310Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.310Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimethinking_streams_separately_from_content3737502120/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.431Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.431Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.431Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.431Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 
msg=general.architecture type=string time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.431Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.432Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.432Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.432Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.432Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.432Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.432Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.432Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.433Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.433Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.433Z level=DEBUG source=sched.go:208 msg="loading first model" 
model=/tmp/TestChatHarmonyParserStreamingRealtimepartial_tags_buffer_until_complete3019318612/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.584Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.584Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.584Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.584Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.584Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.584Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.585Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.586Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.587Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.587Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimesimple_assistant_after_analysis4178906202/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.618Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.618Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.618Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.618Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.618Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown 
time=2025-10-04T04:44:50.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.619Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.619Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_parsed_and_returned_correctly3292971553/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.650Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.650Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.650Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.650Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.650Z level=DEBUG 
source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.650Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.652Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.652Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.652Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_with_streaming_JSON_across_chunks3342510352/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.742Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.742Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.743Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.743Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 
msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.743Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.744Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.744Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.744Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.744Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.744Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.744Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.744Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.745Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.745Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.745Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingSimple1969278924/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.746Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.746Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.746Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string 
time=2025-10-04T04:44:50.746Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.746Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.746Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.746Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.746Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.746Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.746Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.746Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.746Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.747Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.747Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.747Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingsimple_message_without_thinking2826649680/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.747Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.747Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.747Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.747Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv 
type=uint32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.747Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.748Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.748Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.748Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.748Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.748Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.748Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.748Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.748Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.748Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.748Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.748Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.749Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.749Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.750Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingmessage_with_analysis_channel_for_thinking3732309760/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.750Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.750Z level=DEBUG source=sched.go:265 msg="shutting down scheduler 
completed loop" time=2025-10-04T04:44:50.750Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:50.750Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:50.751Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.751Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.751Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:50.751Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:50.751Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.751Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.751Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.751Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:50.752Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.752Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingstreaming_with_partial_tags_across_boundaries1770302835/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:50.752Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.752Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.753Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.753Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown 
time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.755Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.756Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.756Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.756Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.756Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 04:44:50 | 200 | 20.61µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/04 - 04:44:50 | 200 | 46.079µs | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/04 - 04:44:50 | 200 | 90.204µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 04:44:50 | 200 | 180.636µs | 127.0.0.1 | GET "/api/tags" time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.760Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.760Z level=INFO source=images.go:518 msg="total blobs: 3" time=2025-10-04T04:44:50.760Z level=INFO source=images.go:525 msg="total unused blobs removed: 0" time=2025-10-04T04:44:50.760Z level=INFO source=server.go:164 msg=http status=200 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:59810 proto=HTTP/1.1 query="" time=2025-10-04T04:44:50.761Z level=WARN source=server.go:164 msg=http error="model not found" status=404 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:59810 proto=HTTP/1.1 query="" [GIN] 2025/10/04 - 04:44:50 | 200 | 164.865µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 04:44:50 | 200 | 399.658µs | 127.0.0.1 | POST "/api/create" time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 04:44:50 | 200 | 335.596µs | 127.0.0.1 | POST "/api/copy" time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.764Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 04:44:50 | 200 | 380.871µs | 127.0.0.1 | POST "/api/show" time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 04:44:50 | 200 | 403.994µs | 127.0.0.1 | GET 
"/v1/models/show-model" [GIN] 2025/10/04 - 04:44:50 | 405 | 1.097µs | 127.0.0.1 | GET "/api/show" time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.766Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.769Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.769Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.769Z level=DEBUG source=gguf.go:578 
msg=general.type type=string time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.block_count default=0 time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.vision.block_count default=0 time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.block_count default=0 time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.vision.block_count default=0 time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:50.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.770Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T04:44:50.770Z level=INFO source=sched.go:417 msg="NewLlamaServer failed" model=foo error="something failed to load model blah: this model may be incompatible with your version of Ollama. 
If you previously pulled this model, try updating it by running `ollama pull `" time=2025-10-04T04:44:50.771Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T04:44:50.771Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:50.771Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:50.771Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open dummy_model_path: no such file or directory" time=2025-10-04T04:44:50.771Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T04:44:50.771Z level=ERROR source=sched.go:476 msg="error loading llama server" error="wait failure" time=2025-10-04T04:44:50.771Z level=DEBUG source=sched.go:478 msg="triggering expiration for failed load" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=dummy_model_path runner.num_ctx=4096 time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.772Z level=INFO source=sched_test.go:179 msg=a time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSameModelSameRequest890265740/002/2349901989 time=2025-10-04T04:44:50.772Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest890265740/002/2349901989 runner.num_ctx=4096 time=2025-10-04T04:44:50.772Z level=INFO source=sched_test.go:196 msg=b time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSameModelSameRequest890265740/002/2349901989 time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.772Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest890265740/002/2349901989 runner.num_ctx=4096 time=2025-10-04T04:44:50.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.772Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T04:44:50.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.773Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.774Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.774Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.774Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.774Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.774Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.774Z level=INFO source=sched_test.go:223 msg=a time=2025-10-04T04:44:50.774Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.774Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.774Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 time=2025-10-04T04:44:50.774Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:50.774Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 time=2025-10-04T04:44:50.774Z level=INFO source=sched_test.go:241 msg=b time=2025-10-04T04:44:50.774Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 time=2025-10-04T04:44:50.774Z level=DEBUG source=sched.go:154 msg=reloading runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 time=2025-10-04T04:44:50.774Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 refCount=1 time=2025-10-04T04:44:50.774Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 
time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 time=2025-10-04T04:44:50.775Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 time=2025-10-04T04:44:50.775Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:50.775Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="20 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsSimpleReloadSameModel4236497429/002/1808592875 runner.num_ctx=4096 time=2025-10-04T04:44:50.775Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.775Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.775Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.775Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.776Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=INFO source=sched_test.go:274 msg=a time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 time=2025-10-04T04:44:50.776Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 time=2025-10-04T04:44:50.776Z level=INFO source=sched_test.go:293 msg=b time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T04:44:50.776Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T04:44:50.776Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 time=2025-10-04T04:44:50.776Z level=INFO source=sched_test.go:311 msg=c time=2025-10-04T04:44:50.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=cpu available="24.2 GiB" time=2025-10-04T04:44:50.776Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=cpu total="29.8 GiB" available="19.6 GiB" time=2025-10-04T04:44:50.776Z level=INFO source=sched.go:470 msg="loaded runners" count=3 time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T04:44:50.776Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-4a runner.inference=cpu runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/006/1078822427 runner.num_ctx=4096 time=2025-10-04T04:44:50.777Z level=INFO source=sched_test.go:329 msg=d time=2025-10-04T04:44:50.777Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.777Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 duration=5ms time=2025-10-04T04:44:50.777Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:162 msg="max runners achieved, unloading one to make room" runner_count=3 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 runner.num_ctx=4096 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/002/1732743654 time=2025-10-04T04:44:50.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T04:44:50.779Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="3.7 GiB" time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3b runner.inference=metal 
runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 refCount=1 time=2025-10-04T04:44:50.779Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 runner.num_ctx=4096 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/004/342195448 time=2025-10-04T04:44:50.785Z level=DEBUG source=ggml.go:276 msg="key 
with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T04:44:50.785Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T04:44:50.785Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3c runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels3958161050/008/2628125538 runner.num_ctx=4096 time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.785Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.785Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.786Z level=DEBUG 
source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.786Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.786Z level=INFO source=sched_test.go:367 msg=a time=2025-10-04T04:44:50.786Z level=INFO source=sched_test.go:370 msg=b time=2025-10-04T04:44:50.786Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.786Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGetRunner3719226381/002/235848018 time=2025-10-04T04:44:50.786Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:50.786Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 runner.num_ctx=4096 time=2025-10-04T04:44:50.786Z level=INFO source=sched_test.go:394 msg=c time=2025-10-04T04:44:50.786Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open bad path: no such file or directory" time=2025-10-04T04:44:50.786Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.786Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 runner.num_ctx=4096 duration=2ms time=2025-10-04T04:44:50.786Z level=DEBUG source=sched.go:304 msg="after processing request 
finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:50.789Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 runner.num_ctx=4096 time=2025-10-04T04:44:50.789Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 runner.num_ctx=4096 time=2025-10-04T04:44:50.789Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 runner.num_ctx=4096 time=2025-10-04T04:44:50.789Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 runner.num_ctx=4096 time=2025-10-04T04:44:50.789Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 runner.num_ctx=4096 time=2025-10-04T04:44:50.789Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 time=2025-10-04T04:44:50.789Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3719226381/002/235848018 time=2025-10-04T04:44:50.789Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.837Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T04:44:50.837Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 refCount=0 
time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T04:44:50.837Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T04:44:50.857Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.857Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.857Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.857Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.857Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.858Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.858Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.858Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestPrematureExpired1382161211/002/4203197758 time=2025-10-04T04:44:50.858Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:50.858Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 time=2025-10-04T04:44:50.858Z level=INFO source=sched_test.go:481 msg="sending premature expired event now" time=2025-10-04T04:44:50.858Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 time=2025-10-04T04:44:50.858Z level=DEBUG source=sched.go:310 msg="expired event with positive ref count, retrying" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 refCount=1 time=2025-10-04T04:44:50.863Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:50.863Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 duration=5ms time=2025-10-04T04:44:50.863Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:50.868Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 time=2025-10-04T04:44:50.868Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 time=2025-10-04T04:44:50.868Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 time=2025-10-04T04:44:50.868Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 runner.num_ctx=4096 time=2025-10-04T04:44:50.868Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 time=2025-10-04T04:44:50.868Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1382161211/002/4203197758 time=2025-10-04T04:44:50.868Z level=DEBUG source=sched.go:255 msg="ignoring unload event 
with no pending requests" time=2025-10-04T04:44:50.894Z level=ERROR source=sched.go:272 msg="finished request signal received after model unloaded" modelPath=/tmp/TestPrematureExpired1382161211/002/4203197758 time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=1 library=a available="900 B" time=2025-10-04T04:44:50.899Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=1 library=a total="1000 B" available="825 B" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=2 library=a available="1.9 KiB" time=2025-10-04T04:44:50.899Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=2 library=a total="2.0 KiB" available="1.8 KiB" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:50.899Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=b time=2025-10-04T04:44:50.900Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=a time=2025-10-04T04:44:50.900Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:50.900Z level=DEBUG 
source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:50.900Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:50.900Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:50.900Z level=INFO source=sched_test.go:669 msg=scenario1a time=2025-10-04T04:44:50.900Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:50.900Z level=DEBUG source=sched.go:142 msg="pending request cancelled or timed out, skipping scheduling" PASS ok github.com/ollama/ollama/server 0.932s github.com/ollama/ollama/server time=2025-10-04T04:44:52.017Z level=INFO source=logging.go:32 msg="ollama app started" time=2025-10-04T04:44:52.018Z level=DEBUG source=convert.go:232 msg="vocabulary is smaller than expected, padding with dummy tokens" expect=32000 actual=1 time=2025-10-04T04:44:52.024Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=general.file_type type=uint32 time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=general.quantization_version type=uint32 time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=llama.vocab_size type=uint32 time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.pre type=string time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.024Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.052Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.052Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.054Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.054Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.054Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.054Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.054Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T04:44:52.054Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.054Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T04:44:52.054Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.055Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.055Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.055Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.055Z level=DEBUG 
source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.055Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T04:44:52.055Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.055Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T04:44:52.055Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.055Z level=ERROR source=images.go:157 msg="unknown capability" capability=unknown time=2025-10-04T04:44:52.056Z level=WARN source=manifest.go:160 msg="bad manifest name" path=host/namespace/model/.hidden time=2025-10-04T04:44:52.057Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T04:44:52.057Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T04:44:52.058Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T04:44:52.058Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=4 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:52.058Z level=WARN source=quantization.go:145 msg="tensor cols 100 are not divisible by 32, required for Q8_0 - using fallback quantization F16" time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.alignment default=32 time=2025-10-04T04:44:52.058Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.058Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[512 2]" offset=0 time=2025-10-04T04:44:52.058Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.058Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:52.059Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:52.059Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=output.weight requested=Q4_K quantization=Q6_K time=2025-10-04T04:44:52.059Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.059Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.059Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T04:44:52.059Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T04:44:52.059Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=12 shape="[512 2]" offset=0 time=2025-10-04T04:44:52.059Z level=DEBUG source=gguf.go:627 msg=output.weight kind=14 shape="[256 4]" offset=576 time=2025-10-04T04:44:52.059Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.060Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.060Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.060Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape="[512 2]" offset=0 time=2025-10-04T04:44:52.060Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=4096 time=2025-10-04T04:44:52.061Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.061Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:52.061Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=blk.0.attn_v.weight requested=Q4_K quantization=Q6_K time=2025-10-04T04:44:52.061Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.061Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.061Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T04:44:52.061Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T04:44:52.061Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=14 shape="[512 2]" offset=0 time=2025-10-04T04:44:52.061Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=864 time=2025-10-04T04:44:52.061Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.061Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.061Z level=DEBUG source=gguf.go:578 msg=general.architecture 
type=string time=2025-10-04T04:44:52.061Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[32 16 2]" offset=0 time=2025-10-04T04:44:52.061Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T04:44:52.062Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.062Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:52.062Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T04:44:52.062Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.062Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.062Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T04:44:52.062Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T04:44:52.062Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=8 shape="[32 16 2]" offset=0 time=2025-10-04T04:44:52.062Z level=DEBUG source=gguf.go:627 msg=output.weight kind=8 shape="[256 4]" offset=1088 time=2025-10-04T04:44:52.062Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.062Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 
time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.064Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count 
default=0 time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.065Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.066Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.066Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.066Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.066Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type 
not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.068Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:52.068Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.069Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.069Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.069Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.069Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:52.069Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.069Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.069Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.069Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.070Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.070Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.071Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T04:44:52.072Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.073Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.073Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.074Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.074Z level=DEBUG source=create.go:98 msg="create model from model name" from=bob resp = api.ShowResponse{License:"", Modelfile:"# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM test:latest\n\nFROM \nTEMPLATE {{ .Prompt }}\n", Parameters:"", Template:"{{ .Prompt }}", System:"", Renderer:"", Parser:"", Details:api.ModelDetails{ParentModel:"", Format:"", Family:"gptoss", Families:[]string{"gptoss"}, ParameterSize:"20.9B", QuantizationLevel:"MXFP4"}, Messages:[]api.Message(nil), RemoteModel:"bob", RemoteHost:"https://ollama.com:11434", ModelInfo:map[string]interface {}{"general.architecture":"gptoss", "gptoss.context_length":131072, "gptoss.embedding_length":2880}, ProjectorInfo:map[string]interface {}(nil), Tensors:[]api.Tensor(nil), Capabilities:[]model.Capability{"completion", "tools", "thinking"}, ModifiedAt:time.Date(2025, time.October, 4, 4, 44, 52, 74749757, time.UTC)} time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.075Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.075Z level=DEBUG source=gguf.go:578 msg=tokenizer.chat_template type=string time=2025-10-04T04:44:52.076Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.076Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.084Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.084Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.084Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.085Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.086Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.086Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.086Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.086Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.086Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.086Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.087Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.087Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.088Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-10-04T04:44:52.088Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-10-04T04:44:52.088Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so*
time=2025-10-04T04:44:52.088Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build966480348/b001/libcuda.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-10-04T04:44:52.089Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[]
time=2025-10-04T04:44:52.089Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so*
time=2025-10-04T04:44:52.089Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build966480348/b001/libcudart.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcudart.so* /tmp/go-build966480348/b001/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-10-04T04:44:52.089Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[]
time=2025-10-04T04:44:52.089Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2025-10-04T04:44:52.089Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered"
time=2025-10-04T04:44:52.089Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-10-04T04:44:52.089Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.089Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.091Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.091Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.091Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.092Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.092Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.092Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.094Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.7 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.094Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.094Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.096Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.7 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.096Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.096Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.098Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.098Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.098Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.100Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.100Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.100Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.101Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.101Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.101Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.103Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.103Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.103Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.104Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.104Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.104Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.106Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.106Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.107Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.108Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.108Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.108Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1851179208/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.109Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.109Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.110Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.110Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.110Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.110Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.110Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.110Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.110Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.110Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.111Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.111Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.111Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.113Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.113Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.113Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.115Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.115Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.115Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.116Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.116Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.116Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.118Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.118Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.118Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.120Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.120Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.120Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.122Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.122Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.122Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.123Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.123Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.123Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.125Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.125Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.125Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.127Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.127Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.127Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1802303522/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.128Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.128Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.129Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown
time=2025-10-04T04:44:52.131Z level=DEBUG source=manifest.go:53 msg="layer does not exist" digest=sha256:776957f9c9239232f060e29d642d8f5ef3bb931f485c27a13ae6385515fb425c
time=2025-10-04T04:44:52.131Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.131Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.131Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.131Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.131Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.131Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.132Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.132Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.132Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.132Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32
time=2025-10-04T04:44:52.132Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.133Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.133Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.133Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0
time=2025-10-04T04:44:52.133Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0
time=2025-10-04T04:44:52.133Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.133Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.133Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.134Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.134Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.134Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3252678795/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.136Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.136Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.136Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3252678795/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.137Z level=DEBUG source=create.go:98 msg="create model from model name" from=test
time=2025-10-04T04:44:52.138Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.138Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.138Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.138Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.138Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3252678795/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.140Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.140Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.140Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3252678795/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.142Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.142Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.142Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3252678795/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.145Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.145Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.145Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3252678795/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.147Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.147Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.147Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3252678795/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.178Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.178Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.179Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.179Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.179Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.179Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.179Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.179Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.179Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.179Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.180Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32
time=2025-10-04T04:44:52.180Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0
time=2025-10-04T04:44:52.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0
time=2025-10-04T04:44:52.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.182Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.182Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.182Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate12790246/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.184Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.184Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate12790246/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.185Z level=DEBUG source=create.go:98 msg="create model from model name" from=test
time=2025-10-04T04:44:52.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.186Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.186Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate12790246/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.188Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.188Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate12790246/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.190Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.190Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate12790246/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.191Z level=DEBUG source=create.go:98 msg="create model from model name" from=test
time=2025-10-04T04:44:52.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.192Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.193Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.193Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate12790246/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.194Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.194Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate12790246/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.196Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.196Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate12790246/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.197Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.197Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.198Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.198Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.200Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.200Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag180164866/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.202Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.202Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag180164866/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.203Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.204Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag180164866/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.205Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.205Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.205Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag180164866/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.207Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.207Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag180164866/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32
time=2025-10-04T04:44:52.249Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.250Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.250Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.250Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.251Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.251Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.251Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0
time=2025-10-04T04:44:52.251Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-10-04T04:44:52.251Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.251Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.251Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.252Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.253Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.253Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimecontent_streams_as_it_arrives197460495/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5
time=2025-10-04T04:44:52.343Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.344Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.344Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.344Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.344Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.344Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.344Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0
time=2025-10-04T04:44:52.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-10-04T04:44:52.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.346Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.346Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.346Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimethinking_streams_separately_from_content700831362/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5
time=2025-10-04T04:44:52.467Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.467Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.467Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.467Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.467Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.468Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.468Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.468Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0
time=2025-10-04T04:44:52.468Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-10-04T04:44:52.468Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.468Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.468Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.470Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.470Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.470Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimepartial_tags_buffer_until_complete937788689/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5
time=2025-10-04T04:44:52.621Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.621Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.621Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.621Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.621Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.622Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.622Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.622Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.622Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.622Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0
time=2025-10-04T04:44:52.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-10-04T04:44:52.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default=""
time=2025-10-04T04:44:52.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-04T04:44:52.623Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB"
time=2025-10-04T04:44:52.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.623Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimesimple_assistant_after_analysis1079608148/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5
time=2025-10-04T04:44:52.654Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
time=2025-10-04T04:44:52.654Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.654Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.654Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288
time=2025-10-04T04:44:52.654Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320
time=2025-10-04T04:44:52.654Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.655Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown
time=2025-10-04T04:44:52.655Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0
time=2025-10-04T04:44:52.655Z level=DEBUG source=ggml.go:276 msg="key with type not found"
key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:52.655Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.655Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.655Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.656Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:52.656Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.656Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_parsed_and_returned_correctly3821474695/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:52.687Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.687Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:52.687Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.687Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:52.687Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:52.687Z level=DEBUG 
source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:52.688Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.688Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.688Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:52.688Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:52.688Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.688Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.688Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.689Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:52.689Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.689Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_with_streaming_JSON_across_chunks2785346337/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:52.780Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.780Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.780Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:52.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 
msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:52.780Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:52.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:52.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:52.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.783Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:52.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.783Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingSimple1789433872/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:52.783Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.783Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.783Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:52.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 
time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:52.783Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:52.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:52.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:52.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.784Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:52.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.784Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingsimple_message_without_thinking588316796/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:52.785Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.785Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.785Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" 
time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:52.785Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:52.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:52.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:52.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.787Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:52.787Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.787Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingmessage_with_analysis_channel_for_thinking1271739801/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:52.788Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.788Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.788Z level=DEBUG source=sched.go:121 
msg="starting llm scheduler" time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T04:44:52.788Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T04:44:52.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T04:44:52.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T04:44:52.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.789Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="3.8 GiB" before.free_swap="144.0 GiB" now.total="7.6 GiB" now.free="3.8 GiB" now.free_swap="144.0 GiB" time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.789Z level=DEBUG source=sched.go:208 msg="loading first model" 
model=/tmp/TestChatHarmonyParserStreamingstreaming_with_partial_tags_across_boundaries3696529484/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T04:44:52.789Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.789Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key 
with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.793Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.793Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 04:44:52 | 200 | 22.53µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/04 - 04:44:52 | 200 | 46.52µs | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/04 - 04:44:52 | 200 | 95.059µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.795Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 04:44:52 | 200 | 128.418µs | 127.0.0.1 | GET "/api/tags" time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.796Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.architecture default=unknown time=2025-10-04T04:44:52.797Z level=INFO source=images.go:518 msg="total blobs: 3" time=2025-10-04T04:44:52.797Z level=INFO source=images.go:525 msg="total unused blobs removed: 0" time=2025-10-04T04:44:52.797Z level=INFO source=server.go:164 msg=http status=200 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:33298 proto=HTTP/1.1 query="" time=2025-10-04T04:44:52.797Z level=WARN source=server.go:164 msg=http error="model not found" status=404 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:33298 proto=HTTP/1.1 query="" [GIN] 2025/10/04 - 04:44:52 | 200 | 199.285µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.798Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 04:44:52 | 200 | 642.55µs | 127.0.0.1 | POST "/api/create" time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.799Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown [GIN] 2025/10/04 - 04:44:52 | 200 | 287.749µs | 127.0.0.1 | POST "/api/copy" time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 04:44:52 | 200 | 354.755µs | 127.0.0.1 | POST "/api/show" time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 04:44:52 | 200 | 504.544µs | 127.0.0.1 | GET "/v1/models/show-model" [GIN] 2025/10/04 - 04:44:52 | 405 | 666ns | 127.0.0.1 | GET "/api/show" time=2025-10-04T04:44:52.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 
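(Aside, not part of the build output.) The repeated msg="key with type not found" key=... default=... DEBUG entries above are emitted while the test server reads GGUF metadata and falls back to a default whenever a key is missing or has an unexpected type. A minimal sketch of that lookup pattern, assuming only a plain map of metadata — the keyValue helper below is hypothetical and is not ollama's actual code:

// keylookup_sketch.go — illustrative only; mirrors the log messages above,
// not ollama's implementation.
package main

import "fmt"

// KV is a stand-in for parsed GGUF metadata (assumption for this sketch).
type KV map[string]any

// keyValue returns kv[key] if it is present with type T, otherwise defaultValue,
// logging a fallback message in the same style as the build log.
func keyValue[T any](kv KV, key string, defaultValue T) T {
	if v, ok := kv[key].(T); ok {
		return v
	}
	fmt.Printf("key with type not found key=%s default=%v\n", key, defaultValue)
	return defaultValue
}

func main() {
	kv := KV{"general.architecture": "llama"}
	arch := keyValue(kv, "general.architecture", "unknown") // present: no fallback
	align := keyValue(kv, "general.alignment", uint32(32))  // absent: logs fallback
	fmt.Println(arch, align)
}

Run against a map that only contains general.architecture, this prints the same style of fallback line for general.alignment before returning 32, which is the pattern dominating this section of the log.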
time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T04:44:52.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.806Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.806Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.806Z level=DEBUG source=gguf.go:578 msg=general.type type=string time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=clip.block_count default=0 time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.vision.block_count default=0 time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.block_count default=0 time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.vision.block_count default=0 time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T04:44:52.807Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.807Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.807Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T04:44:52.807Z level=INFO source=sched.go:417 msg="NewLlamaServer failed" model=foo error="something failed to load model blah: this model may be incompatible with your version of Ollama. 
If you previously pulled this model, try updating it by running `ollama pull `" time=2025-10-04T04:44:52.807Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T04:44:52.807Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:52.807Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:52.807Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open dummy_model_path: no such file or directory" time=2025-10-04T04:44:52.807Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T04:44:52.807Z level=ERROR source=sched.go:476 msg="error loading llama server" error="wait failure" time=2025-10-04T04:44:52.807Z level=DEBUG source=sched.go:478 msg="triggering expiration for failed load" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=dummy_model_path runner.num_ctx=4096 time=2025-10-04T04:44:52.808Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.808Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.809Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.809Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.809Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.809Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.809Z level=INFO source=sched_test.go:179 msg=a time=2025-10-04T04:44:52.809Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:52.809Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.809Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSameModelSameRequest1465355934/002/816540196 time=2025-10-04T04:44:52.809Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:52.809Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest1465355934/002/816540196 runner.num_ctx=4096 time=2025-10-04T04:44:52.809Z level=INFO source=sched_test.go:196 msg=b time=2025-10-04T04:44:52.809Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSameModelSameRequest1465355934/002/816540196 time=2025-10-04T04:44:52.809Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.809Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.809Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.809Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest1465355934/002/816540196 runner.num_ctx=4096 time=2025-10-04T04:44:52.809Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.810Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T04:44:52.810Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.810Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.810Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.810Z level=INFO source=sched_test.go:223 msg=a time=2025-10-04T04:44:52.810Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:52.810Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.810Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 time=2025-10-04T04:44:52.810Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:52.810Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 time=2025-10-04T04:44:52.810Z level=INFO source=sched_test.go:241 msg=b time=2025-10-04T04:44:52.810Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 time=2025-10-04T04:44:52.810Z level=DEBUG source=sched.go:154 msg=reloading runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 time=2025-10-04T04:44:52.810Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 refCount=1 time=2025-10-04T04:44:52.810Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 
time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 time=2025-10-04T04:44:52.811Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 time=2025-10-04T04:44:52.811Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="20 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsSimpleReloadSameModel2455336314/002/548680799 runner.num_ctx=4096 time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.811Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.812Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.813Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.813Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.813Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.813Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.813Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.814Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.814Z level=INFO source=sched_test.go:274 msg=a time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:52.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 time=2025-10-04T04:44:52.814Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 time=2025-10-04T04:44:52.814Z level=INFO source=sched_test.go:293 msg=b time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T04:44:52.814Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T04:44:52.814Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T04:44:52.814Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 time=2025-10-04T04:44:52.814Z level=INFO source=sched_test.go:311 msg=c time=2025-10-04T04:44:52.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=cpu available="24.2 GiB" time=2025-10-04T04:44:52.814Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=cpu total="29.8 GiB" available="19.6 GiB" time=2025-10-04T04:44:52.814Z level=INFO source=sched.go:470 msg="loaded runners" count=3 time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-4a runner.inference=cpu runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/006/3097354594 runner.num_ctx=4096 time=2025-10-04T04:44:52.814Z level=INFO source=sched_test.go:329 msg=d time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 duration=5ms time=2025-10-04T04:44:52.814Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:52.816Z level=DEBUG source=sched.go:162 msg="max runners achieved, unloading one to make room" runner_count=3 time=2025-10-04T04:44:52.816Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 time=2025-10-04T04:44:52.816Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:52.816Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 time=2025-10-04T04:44:52.816Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 time=2025-10-04T04:44:52.816Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 time=2025-10-04T04:44:52.816Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 time=2025-10-04T04:44:52.816Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 runner.num_ctx=4096 time=2025-10-04T04:44:52.817Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 time=2025-10-04T04:44:52.817Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 time=2025-10-04T04:44:52.817Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/002/1516937089 time=2025-10-04T04:44:52.817Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.817Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T04:44:52.817Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="3.7 GiB" time=2025-10-04T04:44:52.817Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 time=2025-10-04T04:44:52.817Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3b 
runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 refCount=1 time=2025-10-04T04:44:52.817Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 time=2025-10-04T04:44:52.822Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 runner.num_ctx=4096 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/004/3410433509 time=2025-10-04T04:44:52.823Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T04:44:52.823Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T04:44:52.823Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3c runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1838176026/008/1659826604 runner.num_ctx=4096 time=2025-10-04T04:44:52.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.823Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 
time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.823Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.823Z level=INFO source=sched_test.go:367 msg=a time=2025-10-04T04:44:52.823Z level=INFO source=sched_test.go:370 msg=b time=2025-10-04T04:44:52.824Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:52.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.824Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGetRunner3968991002/002/3518218733 time=2025-10-04T04:44:52.824Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:52.824Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 runner.num_ctx=4096 time=2025-10-04T04:44:52.824Z level=INFO source=sched_test.go:394 msg=c time=2025-10-04T04:44:52.824Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open bad path: no such file or directory" time=2025-10-04T04:44:52.824Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.824Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 runner.num_ctx=4096 duration=2ms time=2025-10-04T04:44:52.824Z level=DEBUG 
source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:52.826Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 runner.num_ctx=4096 time=2025-10-04T04:44:52.826Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 runner.num_ctx=4096 time=2025-10-04T04:44:52.826Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 runner.num_ctx=4096 time=2025-10-04T04:44:52.826Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 runner.num_ctx=4096 time=2025-10-04T04:44:52.826Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 runner.num_ctx=4096 time=2025-10-04T04:44:52.826Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 time=2025-10-04T04:44:52.826Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3968991002/002/3518218733 time=2025-10-04T04:44:52.826Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.874Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T04:44:52.874Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=foo runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T04:44:52.874Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T04:44:52.894Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.895Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.895Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T04:44:52.895Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T04:44:52.895Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.895Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T04:44:52.895Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T04:44:52.895Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestPrematureExpired4044122807/002/1283982351 time=2025-10-04T04:44:52.895Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T04:44:52.895Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal 
runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 time=2025-10-04T04:44:52.895Z level=INFO source=sched_test.go:481 msg="sending premature expired event now" time=2025-10-04T04:44:52.895Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 time=2025-10-04T04:44:52.896Z level=DEBUG source=sched.go:310 msg="expired event with positive ref count, retrying" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 refCount=1 time=2025-10-04T04:44:52.901Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T04:44:52.901Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 duration=5ms time=2025-10-04T04:44:52.901Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 refCount=0 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 runner.num_ctx=4096 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 time=2025-10-04T04:44:52.906Z level=DEBUG source=sched.go:332 msg="duplicate expired event, ignoring" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired4044122807/002/1283982351 time=2025-10-04T04:44:52.931Z level=ERROR source=sched.go:272 msg="finished request signal received after model unloaded" modelPath=/tmp/TestPrematureExpired4044122807/002/1283982351 time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=1 library=a available="900 B" time=2025-10-04T04:44:52.937Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=1 library=a total="1000 B" available="825 B" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=2 library=a available="1.9 KiB" time=2025-10-04T04:44:52.937Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=2 library=a total="2.0 KiB" available="1.8 KiB" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=b time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:763 msg="shutting down runner" 
model=a
time=2025-10-04T04:44:52.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0
time=2025-10-04T04:44:52.937Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32
time=2025-10-04T04:44:52.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-04T04:44:52.937Z level=INFO source=sched_test.go:669 msg=scenario1a
time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-04T04:44:52.937Z level=DEBUG source=sched.go:142 msg="pending request cancelled or timed out, skipping scheduling"
time=2025-10-04T04:44:52.943Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop"
time=2025-10-04T04:44:52.943Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop"
PASS
ok github.com/ollama/ollama/server 0.939s
github.com/ollama/ollama/server/internal/cache/blob
PASS
ok github.com/ollama/ollama/server/internal/cache/blob 0.005s
github.com/ollama/ollama/server/internal/cache/blob
PASS
ok github.com/ollama/ollama/server/internal/cache/blob 0.005s
github.com/ollama/ollama/server/internal/client/ollama
2025/10/04 04:44:53 http: TLS handshake error from 127.0.0.1:50664: remote error: tls: bad certificate
PASS
ok github.com/ollama/ollama/server/internal/client/ollama 0.146s
github.com/ollama/ollama/server/internal/client/ollama
2025/10/04 04:44:54 http: TLS handshake error from 127.0.0.1:46328: remote error: tls: bad certificate
PASS
ok github.com/ollama/ollama/server/internal/client/ollama 0.147s
github.com/ollama/ollama/server/internal/internal/backoff
? github.com/ollama/ollama/server/internal/internal/backoff [no test files]
github.com/ollama/ollama/server/internal/internal/names
PASS
ok github.com/ollama/ollama/server/internal/internal/names 0.003s
github.com/ollama/ollama/server/internal/internal/names
PASS
ok github.com/ollama/ollama/server/internal/internal/names 0.002s
github.com/ollama/ollama/server/internal/internal/stringsx
PASS
ok github.com/ollama/ollama/server/internal/internal/stringsx 0.003s
github.com/ollama/ollama/server/internal/internal/stringsx
PASS
ok github.com/ollama/ollama/server/internal/internal/stringsx 0.003s
github.com/ollama/ollama/server/internal/internal/syncs
? github.com/ollama/ollama/server/internal/internal/syncs [no test files]
github.com/ollama/ollama/server/internal/manifest
? github.com/ollama/ollama/server/internal/manifest [no test files]
github.com/ollama/ollama/server/internal/registry
2025/10/04 04:44:55 http: TLS handshake error from 127.0.0.1:35866: write tcp 127.0.0.1:37015->127.0.0.1:35866: use of closed network connection
PASS
ok github.com/ollama/ollama/server/internal/registry 0.009s
github.com/ollama/ollama/server/internal/registry
2025/10/04 04:44:55 http: TLS handshake error from 127.0.0.1:53442: write tcp 127.0.0.1:37589->127.0.0.1:53442: use of closed network connection
PASS
ok github.com/ollama/ollama/server/internal/registry 0.008s
github.com/ollama/ollama/server/internal/testutil
? github.com/ollama/ollama/server/internal/testutil [no test files]
github.com/ollama/ollama/template
PASS
ok github.com/ollama/ollama/template 0.554s
github.com/ollama/ollama/template
PASS
ok github.com/ollama/ollama/template 0.557s
github.com/ollama/ollama/thinking
PASS
ok github.com/ollama/ollama/thinking 0.002s
github.com/ollama/ollama/thinking
PASS
ok github.com/ollama/ollama/thinking 0.002s
github.com/ollama/ollama/tools
PASS
ok github.com/ollama/ollama/tools 0.006s
github.com/ollama/ollama/tools
PASS
ok github.com/ollama/ollama/tools 0.005s
github.com/ollama/ollama/types/errtypes
? github.com/ollama/ollama/types/errtypes [no test files]
github.com/ollama/ollama/types/model
PASS
ok github.com/ollama/ollama/types/model 0.003s
github.com/ollama/ollama/types/model
PASS
ok github.com/ollama/ollama/types/model 0.003s
github.com/ollama/ollama/types/syncmap
? github.com/ollama/ollama/types/syncmap [no test files]
github.com/ollama/ollama/version
? github.com/ollama/ollama/version [no test files]
+ RPM_EC=0
++ jobs -p
+ exit 0
Processing files: ollama-0.12.3-1.fc41.x86_64
Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.voog7d
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ DOCDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export DOCDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/docs /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/CONTRIBUTING.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/README.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/SECURITY.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.7Rvq7M
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export LICENSEDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/vendor/modules.txt /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama
+ RPM_EC=0
++ jobs -p
+ exit 0
warning: File listed twice: /usr/share/licenses/ollama
Provides: bundled(golang(github.com/agnivade/levenshtein)) = 1.1.1 bundled(golang(github.com/apache/arrow/go/arrow)) = bc21918 bundled(golang(github.com/bytedance/sonic)) = 1.11.6
bundled(golang(github.com/bytedance/sonic/loader)) = 0.1.1 bundled(golang(github.com/chewxy/hm)) = 1.0.0 bundled(golang(github.com/chewxy/math32)) = 1.11.0 bundled(golang(github.com/cloudwego/base64x)) = 0.1.4 bundled(golang(github.com/cloudwego/iasm)) = 0.2.0 bundled(golang(github.com/containerd/console)) = 1.0.3 bundled(golang(github.com/d4l3k/go-bfloat16)) = 690c3bd bundled(golang(github.com/davecgh/go-spew)) = 1.1.1 bundled(golang(github.com/dlclark/regexp2)) = 1.11.4 bundled(golang(github.com/emirpasic/gods/v2)) = 2.0.0_alpha bundled(golang(github.com/gabriel-vasile/mimetype)) = 1.4.3 bundled(golang(github.com/gin-contrib/cors)) = 1.7.2 bundled(golang(github.com/gin-contrib/sse)) = 0.1.0 bundled(golang(github.com/gin-gonic/gin)) = 1.10.0 bundled(golang(github.com/go-playground/locales)) = 0.14.1 bundled(golang(github.com/go-playground/universal-translator)) = 0.18.1 bundled(golang(github.com/go-playground/validator/v10)) = 10.20.0 bundled(golang(github.com/goccy/go-json)) = 0.10.2 bundled(golang(github.com/gogo/protobuf)) = 1.3.2 bundled(golang(github.com/golang/protobuf)) = 1.5.4 bundled(golang(github.com/google/flatbuffers)) = 24.3.25+incompatible bundled(golang(github.com/google/go-cmp)) = 0.7.0 bundled(golang(github.com/google/uuid)) = 1.6.0 bundled(golang(github.com/inconshreveable/mousetrap)) = 1.1.0 bundled(golang(github.com/json-iterator/go)) = 1.1.12 bundled(golang(github.com/klauspost/cpuid/v2)) = 2.2.7 bundled(golang(github.com/kr/text)) = 0.2.0 bundled(golang(github.com/leodido/go-urn)) = 1.4.0 bundled(golang(github.com/mattn/go-isatty)) = 0.0.20 bundled(golang(github.com/mattn/go-runewidth)) = 0.0.14 bundled(golang(github.com/modern-go/concurrent)) = bacd9c7 bundled(golang(github.com/modern-go/reflect2)) = 1.0.2 bundled(golang(github.com/nlpodyssey/gopickle)) = 0.3.0 bundled(golang(github.com/olekukonko/tablewriter)) = 0.0.5 bundled(golang(github.com/pdevine/tensor)) = f88f456 bundled(golang(github.com/pelletier/go-toml/v2)) = 2.2.2 bundled(golang(github.com/pkg/errors)) = 0.9.1 bundled(golang(github.com/pmezard/go-difflib)) = 1.0.0 bundled(golang(github.com/rivo/uniseg)) = 0.2.0 bundled(golang(github.com/spf13/cobra)) = 1.7.0 bundled(golang(github.com/spf13/pflag)) = 1.0.5 bundled(golang(github.com/stretchr/testify)) = 1.9.0 bundled(golang(github.com/twitchyliquid64/golang-asm)) = 0.15.1 bundled(golang(github.com/ugorji/go/codec)) = 1.2.12 bundled(golang(github.com/x448/float16)) = 0.8.4 bundled(golang(github.com/xtgo/set)) = 1.0.0 bundled(golang(go4.org/unsafe/assume-no-moving-gc)) = b99613f bundled(golang(golang.org/x/arch)) = 0.8.0 bundled(golang(golang.org/x/crypto)) = 0.36.0 bundled(golang(golang.org/x/exp)) = aa4b98e bundled(golang(golang.org/x/image)) = 0.22.0 bundled(golang(golang.org/x/net)) = 0.38.0 bundled(golang(golang.org/x/sync)) = 0.12.0 bundled(golang(golang.org/x/sys)) = 0.31.0 bundled(golang(golang.org/x/term)) = 0.30.0 bundled(golang(golang.org/x/text)) = 0.23.0 bundled(golang(golang.org/x/tools)) = 0.30.0 bundled(golang(golang.org/x/xerrors)) = 5ec99f8 bundled(golang(gonum.org/v1/gonum)) = 0.15.0 bundled(golang(google.golang.org/protobuf)) = 1.34.1 bundled(golang(gopkg.in/yaml.v3)) = 3.0.1 bundled(golang(gorgonia.org/vecf32)) = 0.9.0 bundled(golang(gorgonia.org/vecf64)) = 0.9.0 bundled(llama-cpp) = b6121 config(ollama) = 0.12.3-1.fc41 group(ollama) group(ollama) = ZyBvbGxhbWEgLSAt ollama = 0.12.3-1.fc41 ollama(x86-64) = 0.12.3-1.fc41 user(ollama) = dSBvbGxhbWEgLSAiT2xsYW1hIiAvdmFyL2xpYi9vbGxhbWEgLQAA Requires(interp): /bin/sh /bin/sh /bin/sh /bin/sh 
Requires(interp): /bin/sh /bin/sh /bin/sh /bin/sh
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires(pre): /bin/sh shadow-utils
Requires(post): /bin/sh
Requires(preun): /bin/sh
Requires(postun): /bin/sh
Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libresolv.so.2()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.11)(64bit) libstdc++.so.6(CXXABI_1.3.13)(64bit) libstdc++.so.6(CXXABI_1.3.15)(64bit) libstdc++.so.6(CXXABI_1.3.2)(64bit) libstdc++.so.6(CXXABI_1.3.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.17)(64bit) libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.19)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.22)(64bit) libstdc++.so.6(GLIBCXX_3.4.25)(64bit) libstdc++.so.6(GLIBCXX_3.4.26)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) libstdc++.so.6(GLIBCXX_3.4.32)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) rtld(GNU_HASH)
Recommends: group(ollama) ollama-ggml user(ollama)
Processing files: ollama-ggml-0.12.3-1.fc41.x86_64
Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.3ZfQsl
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export LICENSEDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml
+ RPM_EC=0
++ jobs -p
+ exit 0
Provides: bundled(llama-cpp) = b6121 ollama-ggml = 0.12.3-1.fc41 ollama-ggml(x86-64) = 0.12.3-1.fc41
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) rtld(GNU_HASH)
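The dependency metadata printed for the main package and the ollama-ggml subpackage can be re-queried later from the result RPMs instead of re-reading this log; a small sketch, assuming the .rpm files written at the end of the build are in the current directory:

  rpm -qp --provides   ollama-ggml-0.12.3-1.fc41.x86_64.rpm   # bundled(llama-cpp) = b6121, ...
  rpm -qp --requires   ollama-0.12.3-1.fc41.x86_64.rpm
  rpm -qp --recommends ollama-0.12.3-1.fc41.x86_64.rpm        # group(ollama) ollama-ggml user(ollama)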
Processing files: ollama-ggml-cpu-0.12.3-1.fc41.x86_64
Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.OUA4Mb
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export LICENSEDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu
+ RPM_EC=0
++ jobs -p
+ exit 0
Provides: bundled(llama-cpp) = b6121 ollama-ggml-cpu = 0.12.3-1.fc41 ollama-ggml-cpu(x86-64) = 0.12.3-1.fc41
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) rtld(GNU_HASH)
Supplements: ollama-ggml(x86-64)
Processing files: ollama-debugsource-0.12.3-1.fc41.x86_64
Provides: ollama-debugsource = 0.12.3-1.fc41 ollama-debugsource(x86-64) = 0.12.3-1.fc41
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Processing files: ollama-debuginfo-0.12.3-1.fc41.x86_64
Provides: debuginfo(build-id) = 43ffe46dcc9fbd6f13947a8404e56687c24e447f ollama-debuginfo = 0.12.3-1.fc41 ollama-debuginfo(x86-64) = 0.12.3-1.fc41
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc41
Processing files: ollama-ggml-debuginfo-0.12.3-1.fc41.x86_64
Provides: debuginfo(build-id) = 07f8f7ba9241a0015ada0fa4e2fa6cc17f06820f libggml-base.so-0.12.3-1.fc41.x86_64.debug()(64bit) ollama-ggml-debuginfo = 0.12.3-1.fc41 ollama-ggml-debuginfo(x86-64) = 0.12.3-1.fc41
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc41
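Each debuginfo(build-id) capability above is the GNU build ID of one packaged ELF object. Once the packages are installed, an ID taken from this log can be mapped back to the debuginfo package that carries it; a sketch using the ollama-ggml-debuginfo ID shown above:

  rpm -q --whatprovides 'debuginfo(build-id) = 07f8f7ba9241a0015ada0fa4e2fa6cc17f06820f'
  # expected owner (per this log): ollama-ggml-debuginfo-0.12.3-1.fc41.x86_64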
Processing files: ollama-ggml-cpu-debuginfo-0.12.3-1.fc41.x86_64
Provides: debuginfo(build-id) = 347bdcc23d296081ae090b9b46aed39c77f04fcd debuginfo(build-id) = 370447765acd23fd3303fc0de30853328f36f7e8 debuginfo(build-id) = 42e6c186b5f93d8c2d9af2dce0b50d7b2090fdae debuginfo(build-id) = 47ed14abc849091b5f7eab40c3cb9e02c34dadb5 debuginfo(build-id) = 6cf175f4592563055ba423c7db17fb5e4d9fa1d6 debuginfo(build-id) = f072be9906ff66bc3dd2331f4dcf6c08b3b5610c debuginfo(build-id) = f6eb31cc3fffc2b9d24b2cb30be3c123dfed8f6c libggml-cpu-alderlake.so-0.12.3-1.fc41.x86_64.debug()(64bit) libggml-cpu-haswell.so-0.12.3-1.fc41.x86_64.debug()(64bit) libggml-cpu-icelake.so-0.12.3-1.fc41.x86_64.debug()(64bit) libggml-cpu-sandybridge.so-0.12.3-1.fc41.x86_64.debug()(64bit) libggml-cpu-skylakex.so-0.12.3-1.fc41.x86_64.debug()(64bit) libggml-cpu-sse42.so-0.12.3-1.fc41.x86_64.debug()(64bit) libggml-cpu-x64.so-0.12.3-1.fc41.x86_64.debug()(64bit) ollama-ggml-cpu-debuginfo = 0.12.3-1.fc41 ollama-ggml-cpu-debuginfo(x86-64) = 0.12.3-1.fc41
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc41
Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT
Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc41.src.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-cpu-debuginfo-0.12.3-1.fc41.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-cpu-0.12.3-1.fc41.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-debuginfo-0.12.3-1.fc41.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-0.12.3-1.fc41.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-debugsource-0.12.3-1.fc41.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-0.12.3-1.fc41.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-debuginfo-0.12.3-1.fc41.x86_64.rpm
Executing(rmbuild): /bin/sh -e /var/tmp/rpm-tmp.1wh7p6
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ test -d /builddir/build/BUILD/ollama-0.12.3-build
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w /builddir/build/BUILD/ollama-0.12.3-build
+ rm -rf /builddir/build/BUILD/ollama-0.12.3-build
+ RPM_EC=0
++ jobs -p
+ exit 0
RPM build warnings:
    File listed twice: /usr/share/licenses/ollama
Finish: rpmbuild ollama-0.12.3-1.fc41.src.rpm
Finish: build phase for ollama-0.12.3-1.fc41.src.rpm
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-41-x86_64-1759552642.334703/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
INFO: Done(/var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc41.src.rpm) Config(child) 7 minutes 58 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
Finish: run
Running RPMResults tool
Package info: {
    "packages": [
        {
            "name": "ollama-debugsource",
            "epoch": null,
            "version": "0.12.3",
            "release": "1.fc41",
            "arch": "x86_64"
        },
        {
            "name": "ollama",
            "epoch": null,
            "version": "0.12.3",
            "release": "1.fc41",
            "arch": "x86_64"
        },
        {
            "name": "ollama-ggml-cpu",
            "epoch": null,
            "version": "0.12.3",
            "release": "1.fc41",
            "arch": "x86_64"
        },
        {
            "name": "ollama-ggml",
            "epoch": null,
            "version": "0.12.3",
            "release": "1.fc41",
            "arch": "x86_64"
        },
        {
            "name": "ollama-ggml-debuginfo",
            "epoch": null,
            "version": "0.12.3",
            "release": "1.fc41",
            "arch": "x86_64"
        },
        {
            "name": "ollama",
            "epoch": null,
            "version": "0.12.3",
            "release": "1.fc41",
            "arch": "src"
        },
        {
            "name": "ollama-ggml-cpu-debuginfo",
            "epoch": null,
            "version": "0.12.3",
            "release": "1.fc41",
            "arch": "x86_64"
        },
        {
            "name": "ollama-debuginfo",
            "epoch": null,
            "version": "0.12.3",
            "release": "1.fc41",
            "arch": "x86_64"
        }
    ]
}
RPMResults finished
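The "Package info" object printed by the RPMResults tool is plain JSON, so the final name-version-release.arch list can be pulled out with jq; results.json is a hypothetical file holding the object shown above:

  jq -r '.packages[] | "\(.name)-\(.version)-\(.release).\(.arch)"' results.json
  # e.g. ollama-0.12.3-1.fc41.x86_64, ollama-0.12.3-1.fc41.src, ...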