Warning: Permanently added '3.237.33.40' (ED25519) to the list of known hosts. You can reproduce this build on your computer by running: sudo dnf install copr-rpmbuild /usr/bin/copr-rpmbuild --verbose --drop-resultdir --task-url https://copr.fedorainfracloud.org/backend/get-build-task/9644096-fedora-rawhide-x86_64 --chroot fedora-rawhide-x86_64 Version: 1.6 PID: 8552 Logging PID: 8554 Task: {'allow_user_ssh': False, 'appstream': False, 'background': False, 'build_id': 9644096, 'buildroot_pkgs': [], 'chroot': 'fedora-rawhide-x86_64', 'enable_net': False, 'fedora_review': False, 'git_hash': 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd', 'git_repo': 'https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama', 'isolation': 'default', 'memory_reqs': 2048, 'package_name': 'ollama', 'package_version': '0.12.3-1', 'project_dirname': 'ollama', 'project_name': 'ollama', 'project_owner': 'fachep', 'repo_priority': None, 'repos': [{'baseurl': 'https://download.copr.fedorainfracloud.org/results/fachep/ollama/fedora-rawhide-x86_64/', 'id': 'copr_base', 'name': 'Copr repository', 'priority': None}, {'baseurl': 'https://developer.download.nvidia.cn/compute/cuda/repos/fedora42/x86_64/', 'id': 'https_developer_download_nvidia_cn_compute_cuda_repos_fedora42_x86_64', 'name': 'Additional repo https_developer_download_nvidia_cn_compute_cuda_repos_fedora42_x86_64'}, {'baseurl': 'https://developer.download.nvidia.cn/compute/cuda/repos/fedora41/x86_64/', 'id': 'https_developer_download_nvidia_cn_compute_cuda_repos_fedora41_x86_64', 'name': 'Additional repo https_developer_download_nvidia_cn_compute_cuda_repos_fedora41_x86_64'}], 'sandbox': 'fachep/ollama--fachep', 'source_json': {}, 'source_type': None, 'ssh_public_keys': None, 'storage': 0, 'submitter': 'fachep', 'tags': [], 'task_id': '9644096-fedora-rawhide-x86_64', 'timeout': 18000, 'uses_devel_repo': False, 'with_opts': [], 'without_opts': []} Running: git clone https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama /var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama --depth 500 --no-single-branch --recursive cmd: ['git', 'clone', 'https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama', '/var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama', '--depth', '500', '--no-single-branch', '--recursive'] cwd: . rc: 0 stdout: stderr: Cloning into '/var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama'... Running: git checkout bd90d2d0f4106e3a74de46dced869f2b79bfddfd -- cmd: ['git', 'checkout', 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd', '--'] cwd: /var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama rc: 0 stdout: stderr: Note: switching to 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by switching back to a branch. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command. 
Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at bd90d2d automatic import of ollama
Running: dist-git-client sources
cmd: ['dist-git-client', 'sources']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama
rc: 0
stdout:
stderr: INFO: Reading stdout from command: git rev-parse --abbrev-ref HEAD
INFO: Reading stdout from command: git rev-parse HEAD
INFO: Reading sources specification file: sources
INFO: Downloading ollama-0.12.3.tar.gz
INFO: Reading stdout from command: curl --help all
INFO: Calling: curl -H Pragma: -o ollama-0.12.3.tar.gz --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/ollama-0.12.3.tar.gz/md5/f096acee5e82596e9afd4d07ed477de2/ollama-0.12.3.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.5M  100 10.5M    0     0   284M      0 --:--:-- --:--:-- --:--:--  292M
INFO: Reading stdout from command: md5sum ollama-0.12.3.tar.gz
INFO: Downloading vendor.tar.bz2
INFO: Calling: curl -H Pragma: -o vendor.tar.bz2 --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/vendor.tar.bz2/md5/c608d605610ed47b385cf54a6f6b2a2c/vendor.tar.bz2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6402k  100 6402k    0     0   212M      0 --:--:-- --:--:-- --:--:--  215M
INFO: Reading stdout from command: md5sum vendor.tar.bz2
tail: /var/lib/copr-rpmbuild/main.log: file truncated
Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1759552642.238252 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: mock.py version 6.3 starting (python version = 3.13.7, NVR = mock-6.3-1.fc42), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1759552642.238252 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
Start(bootstrap): init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish(bootstrap): init plugins
Start: init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish: init plugins
INFO: Signal handler active
Start: run
INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama/ollama.spec) Config(fedora-rawhide-x86_64)
Start: clean chroot
Finish: clean chroot
Mock Version: 6.3
INFO: Mock Version: 6.3
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-rawhide-x86_64-bootstrap-1759552642.238252/root.
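For reference, the reproduce command quoted at the top of this log, broken onto separate lines as it would be typed on a Fedora machine (the task URL and chroot are copied verbatim from the log header):

    sudo dnf install copr-rpmbuild
    /usr/bin/copr-rpmbuild --verbose --drop-resultdir \
        --task-url https://copr.fedorainfracloud.org/backend/get-build-task/9644096-fedora-rawhide-x86_64 \
        --chroot fedora-rawhide-x86_64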
INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start(bootstrap): cleaning package manager metadata Finish(bootstrap): cleaning package manager metadata INFO: Guessed host environment type: unknown INFO: Using container image: registry.fedoraproject.org/fedora:rawhide INFO: Pulling image: registry.fedoraproject.org/fedora:rawhide INFO: Tagging container image as mock-bootstrap-132fe5aa-92f2-4af1-a074-9ee6909883f7 INFO: Checking that 9d62463f5c705099ee8c0448300765fb264b090a392e29434e63301569fc9a61 image matches host's architecture INFO: Copy content of container 9d62463f5c705099ee8c0448300765fb264b090a392e29434e63301569fc9a61 to /var/lib/mock/fedora-rawhide-x86_64-bootstrap-1759552642.238252/root INFO: mounting 9d62463f5c705099ee8c0448300765fb264b090a392e29434e63301569fc9a61 with podman image mount INFO: image 9d62463f5c705099ee8c0448300765fb264b090a392e29434e63301569fc9a61 as /var/lib/containers/storage/overlay/bb03ae2544a51853701fb5428f452e09f7cf6d1afce705e0ea546cc38fb9674f/merged INFO: umounting image 9d62463f5c705099ee8c0448300765fb264b090a392e29434e63301569fc9a61 (/var/lib/containers/storage/overlay/bb03ae2544a51853701fb5428f452e09f7cf6d1afce705e0ea546cc38fb9674f/merged) with podman image umount INFO: Removing image mock-bootstrap-132fe5aa-92f2-4af1-a074-9ee6909883f7 INFO: Package manager dnf5 detected and used (fallback) INFO: Not updating bootstrap chroot, bootstrap_image_ready=True Start(bootstrap): creating root cache Finish(bootstrap): creating root cache Finish(bootstrap): chroot init Start: chroot init INFO: mounting tmpfs at /var/lib/mock/fedora-rawhide-x86_64-1759552642.238252/root. INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start: cleaning package manager metadata Finish: cleaning package manager metadata INFO: enabled HW Info plugin INFO: Package manager dnf5 detected and used (direct choice) INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-6.0.0-1.fc44.x86_64 rpm-sequoia-1.9.0-2.fc43.x86_64 dnf5-5.2.17.0-2.fc44.x86_64 dnf5-plugins-5.2.17.0-2.fc44.x86_64 Start: installing minimal buildroot with dnf5 Updating and loading repositories: Copr repository 100% | 2.4 KiB/s | 1.6 KiB | 00m01s Additional repo https_developer_downlo 100% | 40.6 KiB/s | 47.8 KiB | 00m01s fedora 100% | 31.6 MiB/s | 42.1 MiB | 00m01s Additional repo https_developer_downlo 100% | 86.9 KiB/s | 109.0 KiB | 00m01s Repositories loaded. 
Package Arch Version Repository Size Installing group/module packages: bash x86_64 5.3.0-2.fc43 fedora 8.4 MiB bzip2 x86_64 1.0.8-21.fc43 fedora 95.3 KiB coreutils x86_64 9.8-3.fc44 fedora 5.4 MiB cpio x86_64 2.15-6.fc43 fedora 1.1 MiB diffutils x86_64 3.12-3.fc43 fedora 1.6 MiB fedora-release-common noarch 44-0.3 fedora 20.6 KiB findutils x86_64 1:4.10.0-6.fc43 fedora 1.8 MiB gawk x86_64 5.3.2-2.fc43 fedora 1.8 MiB glibc-minimal-langpack x86_64 2.42.9000-5.fc44 fedora 0.0 B grep x86_64 3.12-2.fc43 fedora 1.0 MiB gzip x86_64 1.14-1.fc44 fedora 397.8 KiB info x86_64 7.2-6.fc43 fedora 353.9 KiB patch x86_64 2.8-2.fc43 fedora 222.8 KiB redhat-rpm-config noarch 343-14.fc44 fedora 183.3 KiB rpm-build x86_64 6.0.0-1.fc44 fedora 287.4 KiB sed x86_64 4.9-5.fc43 fedora 857.3 KiB shadow-utils x86_64 2:4.18.0-3.fc43 fedora 3.9 MiB tar x86_64 2:1.35-6.fc43 fedora 2.9 MiB unzip x86_64 6.0-68.fc44 fedora 390.3 KiB util-linux x86_64 2.41.1-17.fc44 fedora 3.5 MiB which x86_64 2.23-3.fc43 fedora 83.5 KiB xz x86_64 1:5.8.1-2.fc43 fedora 1.3 MiB Installing dependencies: add-determinism x86_64 0.7.2-2.fc44 fedora 2.3 MiB alternatives x86_64 1.33-2.fc43 fedora 62.2 KiB ansible-srpm-macros noarch 1-18.1.fc43 fedora 35.7 KiB audit-libs x86_64 4.1.2-2.fc44 fedora 378.8 KiB binutils x86_64 2.45.50-4.fc44 fedora 27.3 MiB build-reproducibility-srpm-macros noarch 0.7.2-2.fc44 fedora 1.2 KiB bzip2-libs x86_64 1.0.8-21.fc43 fedora 80.6 KiB ca-certificates noarch 2025.2.80_v9.0.304-2.fc44 fedora 2.7 MiB coreutils-common x86_64 9.8-3.fc44 fedora 11.1 MiB crypto-policies noarch 20250714-5.gitcd6043a.fc44 fedora 146.9 KiB curl x86_64 8.16.0-1.fc44 fedora 475.3 KiB cyrus-sasl-lib x86_64 2.1.28-33.fc44 fedora 2.3 MiB debugedit x86_64 5.2-3.fc44 fedora 214.0 KiB dwz x86_64 0.16-2.fc43 fedora 287.1 KiB ed x86_64 1.22.2-1.fc44 fedora 148.1 KiB efi-srpm-macros noarch 6-4.fc43 fedora 40.1 KiB elfutils x86_64 0.193-3.fc43 fedora 2.9 MiB elfutils-debuginfod-client x86_64 0.193-3.fc43 fedora 83.9 KiB elfutils-default-yama-scope noarch 0.193-3.fc43 fedora 1.8 KiB elfutils-libelf x86_64 0.193-3.fc43 fedora 1.2 MiB elfutils-libs x86_64 0.193-3.fc43 fedora 683.4 KiB fedora-gpg-keys noarch 44-0.1 fedora 131.2 KiB fedora-release noarch 44-0.3 fedora 0.0 B fedora-release-identity-basic noarch 44-0.3 fedora 664.0 B fedora-repos noarch 44-0.1 fedora 4.9 KiB fedora-repos-rawhide noarch 44-0.1 fedora 2.2 KiB file x86_64 5.46-8.fc44 fedora 100.2 KiB file-libs x86_64 5.46-8.fc44 fedora 11.9 MiB filesystem x86_64 3.18-50.fc43 fedora 112.0 B filesystem-srpm-macros noarch 3.18-50.fc43 fedora 38.2 KiB fonts-srpm-macros noarch 1:5.0.0-1.fc44 fedora 55.8 KiB forge-srpm-macros noarch 0.4.0-3.fc43 fedora 38.9 KiB fpc-srpm-macros noarch 1.3-15.fc43 fedora 144.0 B gap-srpm-macros noarch 2-1.fc44 fedora 2.1 KiB gdb-minimal x86_64 16.3-6.fc44 fedora 13.3 MiB gdbm-libs x86_64 1:1.23-10.fc43 fedora 129.9 KiB ghc-srpm-macros noarch 1.9.2-3.fc43 fedora 779.0 B glibc x86_64 2.42.9000-5.fc44 fedora 6.7 MiB glibc-common x86_64 2.42.9000-5.fc44 fedora 1.0 MiB glibc-gconv-extra x86_64 2.42.9000-5.fc44 fedora 7.2 MiB gmp x86_64 1:6.3.0-4.fc44 fedora 815.3 KiB gnat-srpm-macros noarch 6-8.fc43 fedora 1.0 KiB gnulib-l10n noarch 20241231-1.fc44 fedora 655.0 KiB gnupg2 x86_64 2.4.8-4.fc43 fedora 6.5 MiB gnupg2-dirmngr x86_64 2.4.8-4.fc43 fedora 618.4 KiB gnupg2-gpg-agent x86_64 2.4.8-4.fc43 fedora 671.4 KiB gnupg2-gpgconf x86_64 2.4.8-4.fc43 fedora 250.0 KiB gnupg2-keyboxd x86_64 2.4.8-4.fc43 fedora 201.4 KiB gnupg2-verify x86_64 2.4.8-4.fc43 fedora 348.5 KiB gnutls 
x86_64 3.8.10-5.fc44 fedora 3.8 MiB go-srpm-macros noarch 3.8.0-1.fc44 fedora 61.9 KiB gpgverify noarch 2.2-3.fc43 fedora 8.7 KiB ima-evm-utils-libs x86_64 1.6.2-6.fc43 fedora 60.7 KiB jansson x86_64 2.14-3.fc43 fedora 89.1 KiB java-srpm-macros noarch 1-7.fc43 fedora 870.0 B json-c x86_64 0.18-7.fc43 fedora 82.7 KiB kernel-srpm-macros noarch 1.0-27.fc43 fedora 1.9 KiB keyutils-libs x86_64 1.6.3-6.fc43 fedora 54.3 KiB krb5-libs x86_64 1.21.3-8.fc44 fedora 2.3 MiB libacl x86_64 2.3.2-4.fc43 fedora 35.9 KiB libarchive x86_64 3.8.1-3.fc43 fedora 951.1 KiB libassuan x86_64 2.5.7-4.fc43 fedora 163.8 KiB libattr x86_64 2.5.2-6.fc43 fedora 24.4 KiB libblkid x86_64 2.41.1-17.fc44 fedora 262.4 KiB libbrotli x86_64 1.1.0-10.fc44 fedora 833.3 KiB libcap x86_64 2.76-3.fc44 fedora 209.1 KiB libcap-ng x86_64 0.8.5-8.fc44 fedora 68.9 KiB libcom_err x86_64 1.47.3-2.fc43 fedora 63.1 KiB libcurl x86_64 8.16.0-1.fc44 fedora 919.5 KiB libeconf x86_64 0.7.9-2.fc43 fedora 64.9 KiB libevent x86_64 2.1.12-16.fc43 fedora 883.1 KiB libfdisk x86_64 2.41.1-17.fc44 fedora 380.4 KiB libffi x86_64 3.5.2-1.fc44 fedora 83.8 KiB libfsverity x86_64 1.6-3.fc43 fedora 28.5 KiB libgcc x86_64 15.2.1-2.fc44 fedora 266.6 KiB libgcrypt x86_64 1.11.1-2.fc43 fedora 1.6 MiB libgomp x86_64 15.2.1-2.fc44 fedora 541.1 KiB libgpg-error x86_64 1.55-2.fc43 fedora 915.3 KiB libidn2 x86_64 2.3.8-2.fc43 fedora 552.5 KiB libksba x86_64 1.6.7-4.fc43 fedora 398.5 KiB liblastlog2 x86_64 2.41.1-17.fc44 fedora 33.9 KiB libmount x86_64 2.41.1-17.fc44 fedora 372.7 KiB libnghttp2 x86_64 1.67.1-1.fc44 fedora 162.2 KiB libpkgconf x86_64 2.3.0-3.fc43 fedora 78.1 KiB libpsl x86_64 0.21.5-6.fc43 fedora 76.4 KiB libselinux x86_64 3.9-5.fc44 fedora 193.1 KiB libselinux-utils x86_64 3.9-5.fc44 fedora 309.0 KiB libsemanage x86_64 3.9-4.fc44 fedora 308.5 KiB libsepol x86_64 3.9-2.fc43 fedora 822.0 KiB libsmartcols x86_64 2.41.1-17.fc44 fedora 180.5 KiB libssh x86_64 0.11.3-1.fc44 fedora 567.1 KiB libssh-config noarch 0.11.3-1.fc44 fedora 277.0 B libstdc++ x86_64 15.2.1-2.fc44 fedora 2.8 MiB libtasn1 x86_64 4.20.0-2.fc43 fedora 176.3 KiB libtool-ltdl x86_64 2.5.4-7.fc43 fedora 70.1 KiB libunistring x86_64 1.1-10.fc43 fedora 1.7 MiB libusb1 x86_64 1.0.29-4.fc44 fedora 171.3 KiB libuuid x86_64 2.41.1-17.fc44 fedora 37.4 KiB libverto x86_64 0.3.2-11.fc43 fedora 25.4 KiB libxcrypt x86_64 4.4.38-9.fc44 fedora 284.4 KiB libxml2 x86_64 2.12.10-5.fc44 fedora 1.7 MiB libzstd x86_64 1.5.7-3.fc44 fedora 940.3 KiB linkdupes x86_64 0.7.2-2.fc44 fedora 838.7 KiB lua-libs x86_64 5.4.8-2.fc43 fedora 280.8 KiB lua-srpm-macros noarch 1-16.fc43 fedora 1.3 KiB lz4-libs x86_64 1.10.0-3.fc43 fedora 161.4 KiB mpfr x86_64 4.2.2-2.fc43 fedora 832.8 KiB ncurses-base noarch 6.5-7.20250614.fc43 fedora 328.1 KiB ncurses-libs x86_64 6.5-7.20250614.fc43 fedora 946.3 KiB nettle x86_64 3.10.1-2.fc43 fedora 790.6 KiB npth x86_64 1.8-3.fc43 fedora 49.6 KiB ocaml-srpm-macros noarch 11-2.fc43 fedora 1.9 KiB openblas-srpm-macros noarch 2-20.fc43 fedora 112.0 B openldap x86_64 2.6.10-4.fc44 fedora 659.8 KiB openssl-libs x86_64 1:3.5.1-3.fc44 fedora 9.2 MiB p11-kit x86_64 0.25.8-1.fc44 fedora 2.3 MiB p11-kit-trust x86_64 0.25.8-1.fc44 fedora 446.5 KiB package-notes-srpm-macros noarch 0.5-14.fc43 fedora 1.6 KiB pam-libs x86_64 1.7.1-3.fc43 fedora 126.8 KiB pcre2 x86_64 10.46-1.fc44 fedora 697.7 KiB pcre2-syntax noarch 10.46-1.fc44 fedora 275.3 KiB perl-srpm-macros noarch 1-60.fc43 fedora 861.0 B pkgconf x86_64 2.3.0-3.fc43 fedora 88.5 KiB pkgconf-m4 noarch 2.3.0-3.fc43 fedora 14.4 KiB pkgconf-pkg-config 
x86_64 2.3.0-3.fc43 fedora 989.0 B policycoreutils x86_64 3.9-5.fc44 fedora 683.5 KiB popt x86_64 1.19-9.fc43 fedora 132.8 KiB publicsuffix-list-dafsa noarch 20250616-2.fc43 fedora 69.1 KiB pyproject-srpm-macros noarch 1.18.4-1.fc44 fedora 1.9 KiB python-srpm-macros noarch 3.14-8.fc44 fedora 51.6 KiB qt5-srpm-macros noarch 5.15.17-2.fc43 fedora 500.0 B qt6-srpm-macros noarch 6.9.2-1.fc44 fedora 464.0 B readline x86_64 8.3-2.fc43 fedora 511.7 KiB rpm x86_64 6.0.0-1.fc44 fedora 3.1 MiB rpm-build-libs x86_64 6.0.0-1.fc44 fedora 268.4 KiB rpm-libs x86_64 6.0.0-1.fc44 fedora 933.8 KiB rpm-plugin-selinux x86_64 6.0.0-1.fc44 fedora 12.0 KiB rpm-sequoia x86_64 1.9.0-2.fc43 fedora 2.5 MiB rpm-sign-libs x86_64 6.0.0-1.fc44 fedora 39.7 KiB rust-srpm-macros noarch 26.4-1.fc44 fedora 4.8 KiB selinux-policy noarch 42.11-1.fc44 fedora 31.7 KiB selinux-policy-targeted noarch 42.11-1.fc44 fedora 18.7 MiB setup noarch 2.15.0-26.fc43 fedora 725.0 KiB sqlite-libs x86_64 3.50.4-1.fc44 fedora 1.5 MiB systemd-libs x86_64 258-1.fc44 fedora 2.3 MiB systemd-standalone-sysusers x86_64 258-1.fc44 fedora 293.5 KiB tpm2-tss x86_64 4.1.3-8.fc43 fedora 1.6 MiB tree-sitter-srpm-macros noarch 0.4.2-1.fc43 fedora 8.3 KiB util-linux-core x86_64 2.41.1-17.fc44 fedora 1.5 MiB xxhash-libs x86_64 0.8.3-3.fc43 fedora 90.2 KiB xz-libs x86_64 1:5.8.1-2.fc43 fedora 217.8 KiB zig-srpm-macros noarch 1-5.fc43 fedora 1.1 KiB zip x86_64 3.0-44.fc43 fedora 694.5 KiB zlib-ng-compat x86_64 2.2.5-2.fc44 fedora 137.6 KiB zstd x86_64 1.5.7-3.fc44 fedora 506.2 KiB Installing groups: Buildsystem building group Transaction Summary: Installing: 177 packages Total size of inbound packages is 66 MiB. Need to download 66 MiB. After this operation, 219 MiB extra will be used (install 219 MiB, remove 0 B). [ 1/177] bzip2-0:1.0.8-21.fc43.x86_64 100% | 3.2 MiB/s | 51.6 KiB | 00m00s [ 2/177] coreutils-0:9.8-3.fc44.x86_64 100% | 57.4 MiB/s | 1.1 MiB | 00m00s [ 3/177] bash-0:5.3.0-2.fc43.x86_64 100% | 64.5 MiB/s | 1.9 MiB | 00m00s [ 4/177] cpio-0:2.15-6.fc43.x86_64 100% | 22.0 MiB/s | 293.1 KiB | 00m00s [ 5/177] diffutils-0:3.12-3.fc43.x86_6 100% | 42.6 MiB/s | 392.3 KiB | 00m00s [ 6/177] fedora-release-common-0:44-0. 100% | 8.1 MiB/s | 25.0 KiB | 00m00s [ 7/177] glibc-minimal-langpack-0:2.42 100% | 14.9 MiB/s | 45.7 KiB | 00m00s [ 8/177] findutils-1:4.10.0-6.fc43.x86 100% | 107.4 MiB/s | 550.0 KiB | 00m00s [ 9/177] grep-0:3.12-2.fc43.x86_64 100% | 58.4 MiB/s | 299.1 KiB | 00m00s [ 10/177] gzip-0:1.14-1.fc44.x86_64 100% | 43.4 MiB/s | 177.7 KiB | 00m00s [ 11/177] info-0:7.2-6.fc43.x86_64 100% | 44.6 MiB/s | 182.9 KiB | 00m00s [ 12/177] patch-0:2.8-2.fc43.x86_64 100% | 27.8 MiB/s | 113.8 KiB | 00m00s [ 13/177] redhat-rpm-config-0:343-14.fc 100% | 19.3 MiB/s | 79.2 KiB | 00m00s [ 14/177] rpm-build-0:6.0.0-1.fc44.x86_ 100% | 33.7 MiB/s | 138.0 KiB | 00m00s [ 15/177] sed-0:4.9-5.fc43.x86_64 100% | 77.4 MiB/s | 317.1 KiB | 00m00s [ 16/177] shadow-utils-2:4.18.0-3.fc43. 
100% | 183.2 MiB/s | 1.3 MiB | 00m00s [ 17/177] tar-2:1.35-6.fc43.x86_64 100% | 119.5 MiB/s | 856.4 KiB | 00m00s [ 18/177] which-0:2.23-3.fc43.x86_64 100% | 20.4 MiB/s | 41.7 KiB | 00m00s [ 19/177] unzip-0:6.0-68.fc44.x86_64 100% | 30.0 MiB/s | 184.6 KiB | 00m00s [ 20/177] xz-1:5.8.1-2.fc43.x86_64 100% | 111.8 MiB/s | 572.5 KiB | 00m00s [ 21/177] gawk-0:5.3.2-2.fc43.x86_64 100% | 160.7 MiB/s | 1.1 MiB | 00m00s [ 22/177] util-linux-0:2.41.1-17.fc44.x 100% | 132.4 MiB/s | 1.2 MiB | 00m00s [ 23/177] filesystem-0:3.18-50.fc43.x86 100% | 121.2 MiB/s | 1.3 MiB | 00m00s [ 24/177] ncurses-libs-0:6.5-7.20250614 100% | 40.6 MiB/s | 332.7 KiB | 00m00s [ 25/177] glibc-0:2.42.9000-5.fc44.x86_ 100% | 170.0 MiB/s | 2.2 MiB | 00m00s [ 26/177] bzip2-libs-0:1.0.8-21.fc43.x8 100% | 8.4 MiB/s | 43.1 KiB | 00m00s [ 27/177] libacl-0:2.3.2-4.fc43.x86_64 100% | 5.9 MiB/s | 24.3 KiB | 00m00s [ 28/177] gmp-1:6.3.0-4.fc44.x86_64 100% | 44.5 MiB/s | 319.3 KiB | 00m00s [ 29/177] coreutils-common-0:9.8-3.fc44 100% | 161.1 MiB/s | 2.1 MiB | 00m00s [ 30/177] libattr-0:2.5.2-6.fc43.x86_64 100% | 3.5 MiB/s | 17.9 KiB | 00m00s [ 31/177] libcap-0:2.76-3.fc44.x86_64 100% | 17.0 MiB/s | 86.9 KiB | 00m00s [ 32/177] libselinux-0:3.9-5.fc44.x86_6 100% | 23.9 MiB/s | 97.8 KiB | 00m00s [ 33/177] systemd-libs-0:258-1.fc44.x86 100% | 133.5 MiB/s | 820.1 KiB | 00m00s [ 34/177] fedora-repos-0:44-0.1.noarch 100% | 1.8 MiB/s | 9.1 KiB | 00m00s [ 35/177] glibc-common-0:2.42.9000-5.fc 100% | 64.9 MiB/s | 332.4 KiB | 00m00s [ 36/177] pcre2-0:10.46-1.fc44.x86_64 100% | 64.0 MiB/s | 262.2 KiB | 00m00s [ 37/177] ed-0:1.22.2-1.fc44.x86_64 100% | 20.4 MiB/s | 83.7 KiB | 00m00s [ 38/177] ansible-srpm-macros-0:1-18.1. 100% | 6.5 MiB/s | 19.9 KiB | 00m00s [ 39/177] build-reproducibility-srpm-ma 100% | 6.3 MiB/s | 12.9 KiB | 00m00s [ 40/177] efi-srpm-macros-0:6-4.fc43.no 100% | 10.9 MiB/s | 22.4 KiB | 00m00s [ 41/177] dwz-0:0.16-2.fc43.x86_64 100% | 44.1 MiB/s | 135.5 KiB | 00m00s [ 42/177] file-0:5.46-8.fc44.x86_64 100% | 23.8 MiB/s | 48.8 KiB | 00m00s [ 43/177] filesystem-srpm-macros-0:3.18 100% | 8.6 MiB/s | 26.4 KiB | 00m00s [ 44/177] fonts-srpm-macros-1:5.0.0-1.f 100% | 8.9 MiB/s | 27.3 KiB | 00m00s [ 45/177] forge-srpm-macros-0:0.4.0-3.f 100% | 9.8 MiB/s | 20.1 KiB | 00m00s [ 46/177] fpc-srpm-macros-0:1.3-15.fc43 100% | 7.7 MiB/s | 7.9 KiB | 00m00s [ 47/177] gap-srpm-macros-0:2-1.fc44.no 100% | 4.4 MiB/s | 9.1 KiB | 00m00s [ 48/177] gnat-srpm-macros-0:6-8.fc43.n 100% | 4.1 MiB/s | 8.5 KiB | 00m00s [ 49/177] ghc-srpm-macros-0:1.9.2-3.fc4 100% | 4.3 MiB/s | 8.7 KiB | 00m00s [ 50/177] go-srpm-macros-0:3.8.0-1.fc44 100% | 13.8 MiB/s | 28.3 KiB | 00m00s [ 51/177] java-srpm-macros-0:1-7.fc43.n 100% | 3.9 MiB/s | 7.9 KiB | 00m00s [ 52/177] kernel-srpm-macros-0:1.0-27.f 100% | 8.7 MiB/s | 8.9 KiB | 00m00s [ 53/177] lua-srpm-macros-0:1-16.fc43.n 100% | 8.6 MiB/s | 8.8 KiB | 00m00s [ 54/177] ocaml-srpm-macros-0:11-2.fc43 100% | 9.0 MiB/s | 9.3 KiB | 00m00s [ 55/177] openblas-srpm-macros-0:2-20.f 100% | 7.4 MiB/s | 7.6 KiB | 00m00s [ 56/177] package-notes-srpm-macros-0:0 100% | 4.4 MiB/s | 9.0 KiB | 00m00s [ 57/177] perl-srpm-macros-0:1-60.fc43. 100% | 4.0 MiB/s | 8.3 KiB | 00m00s [ 58/177] pyproject-srpm-macros-0:1.18. 
100% | 6.7 MiB/s | 13.7 KiB | 00m00s [ 59/177] qt5-srpm-macros-0:5.15.17-2.f 100% | 4.2 MiB/s | 8.7 KiB | 00m00s [ 60/177] python-srpm-macros-0:3.14-8.f 100% | 11.5 MiB/s | 23.7 KiB | 00m00s [ 61/177] qt6-srpm-macros-0:6.9.2-1.fc4 100% | 4.6 MiB/s | 9.4 KiB | 00m00s [ 62/177] rust-srpm-macros-0:26.4-1.fc4 100% | 5.4 MiB/s | 11.2 KiB | 00m00s [ 63/177] tree-sitter-srpm-macros-0:0.4 100% | 6.5 MiB/s | 13.4 KiB | 00m00s [ 64/177] rpm-0:6.0.0-1.fc44.x86_64 100% | 140.8 MiB/s | 576.6 KiB | 00m00s [ 65/177] zig-srpm-macros-0:1-5.fc43.no 100% | 2.7 MiB/s | 8.4 KiB | 00m00s [ 66/177] zip-0:3.0-44.fc43.x86_64 100% | 51.1 MiB/s | 261.6 KiB | 00m00s [ 67/177] debugedit-0:5.2-3.fc44.x86_64 100% | 16.7 MiB/s | 85.6 KiB | 00m00s [ 68/177] elfutils-0:0.193-3.fc43.x86_6 100% | 111.6 MiB/s | 571.3 KiB | 00m00s [ 69/177] elfutils-libelf-0:0.193-3.fc4 100% | 50.7 MiB/s | 207.8 KiB | 00m00s [ 70/177] libarchive-0:3.8.1-3.fc43.x86 100% | 102.8 MiB/s | 421.1 KiB | 00m00s [ 71/177] libgcc-0:15.2.1-2.fc44.x86_64 100% | 26.0 MiB/s | 133.0 KiB | 00m00s [ 72/177] libstdc++-0:15.2.1-2.fc44.x86 100% | 128.4 MiB/s | 920.1 KiB | 00m00s [ 73/177] popt-0:1.19-9.fc43.x86_64 100% | 9.2 MiB/s | 65.7 KiB | 00m00s [ 74/177] readline-0:8.3-2.fc43.x86_64 100% | 36.6 MiB/s | 224.6 KiB | 00m00s [ 75/177] rpm-libs-0:6.0.0-1.fc44.x86_6 100% | 97.8 MiB/s | 400.5 KiB | 00m00s [ 76/177] zstd-0:1.5.7-3.fc44.x86_64 100% | 61.7 MiB/s | 189.5 KiB | 00m00s [ 77/177] rpm-build-libs-0:6.0.0-1.fc44 100% | 25.0 MiB/s | 127.9 KiB | 00m00s [ 78/177] libeconf-0:0.7.9-2.fc43.x86_6 100% | 17.2 MiB/s | 35.2 KiB | 00m00s [ 79/177] audit-libs-0:4.1.2-2.fc44.x86 100% | 45.1 MiB/s | 138.4 KiB | 00m00s [ 80/177] libsemanage-0:3.9-4.fc44.x86_ 100% | 40.2 MiB/s | 123.5 KiB | 00m00s [ 81/177] libxcrypt-0:4.4.38-9.fc44.x86 100% | 41.4 MiB/s | 127.1 KiB | 00m00s [ 82/177] pam-libs-0:1.7.1-3.fc43.x86_6 100% | 18.7 MiB/s | 57.5 KiB | 00m00s [ 83/177] setup-0:2.15.0-26.fc43.noarch 100% | 51.2 MiB/s | 157.3 KiB | 00m00s [ 84/177] xz-libs-1:5.8.1-2.fc43.x86_64 100% | 36.8 MiB/s | 112.9 KiB | 00m00s [ 85/177] mpfr-0:4.2.2-2.fc43.x86_64 100% | 112.9 MiB/s | 347.0 KiB | 00m00s [ 86/177] libblkid-0:2.41.1-17.fc44.x86 100% | 40.1 MiB/s | 123.2 KiB | 00m00s [ 87/177] libcap-ng-0:0.8.5-8.fc44.x86_ 100% | 15.7 MiB/s | 32.2 KiB | 00m00s [ 88/177] liblastlog2-0:2.41.1-17.fc44. 
100% | 11.3 MiB/s | 23.2 KiB | 00m00s [ 89/177] libfdisk-0:2.41.1-17.fc44.x86 100% | 52.5 MiB/s | 161.3 KiB | 00m00s [ 90/177] libmount-0:2.41.1-17.fc44.x86 100% | 52.9 MiB/s | 162.6 KiB | 00m00s [ 91/177] libuuid-0:2.41.1-17.fc44.x86_ 100% | 12.8 MiB/s | 26.3 KiB | 00m00s [ 92/177] libsmartcols-0:2.41.1-17.fc44 100% | 27.4 MiB/s | 84.0 KiB | 00m00s [ 93/177] util-linux-core-0:2.41.1-17.f 100% | 134.5 MiB/s | 550.7 KiB | 00m00s [ 94/177] zlib-ng-compat-0:2.2.5-2.fc44 100% | 15.5 MiB/s | 79.2 KiB | 00m00s [ 95/177] glibc-gconv-extra-0:2.42.9000 100% | 176.9 MiB/s | 1.6 MiB | 00m00s [ 96/177] ncurses-base-0:6.5-7.20250614 100% | 14.4 MiB/s | 88.2 KiB | 00m00s [ 97/177] gnulib-l10n-0:20241231-1.fc44 100% | 24.4 MiB/s | 150.2 KiB | 00m00s [ 98/177] fedora-repos-rawhide-0:44-0.1 100% | 8.4 MiB/s | 8.6 KiB | 00m00s [ 99/177] libsepol-0:3.9-2.fc43.x86_64 100% | 67.5 MiB/s | 345.4 KiB | 00m00s [100/177] fedora-gpg-keys-0:44-0.1.noar 100% | 27.1 MiB/s | 138.8 KiB | 00m00s [101/177] pcre2-syntax-0:10.46-1.fc44.n 100% | 52.8 MiB/s | 162.2 KiB | 00m00s [102/177] linkdupes-0:0.7.2-2.fc44.x86_ 100% | 87.0 MiB/s | 356.3 KiB | 00m00s [103/177] add-determinism-0:0.7.2-2.fc4 100% | 123.8 MiB/s | 887.6 KiB | 00m00s [104/177] file-libs-0:5.46-8.fc44.x86_6 100% | 118.6 MiB/s | 849.9 KiB | 00m00s [105/177] curl-0:8.16.0-1.fc44.x86_64 100% | 46.1 MiB/s | 235.9 KiB | 00m00s [106/177] elfutils-libs-0:0.193-3.fc43. 100% | 65.9 MiB/s | 269.7 KiB | 00m00s [107/177] elfutils-debuginfod-client-0: 100% | 22.9 MiB/s | 46.8 KiB | 00m00s [108/177] libzstd-0:1.5.7-3.fc44.x86_64 100% | 87.7 MiB/s | 359.1 KiB | 00m00s [109/177] libxml2-0:2.12.10-5.fc44.x86_ 100% | 135.3 MiB/s | 692.7 KiB | 00m00s [110/177] lz4-libs-0:1.10.0-3.fc43.x86_ 100% | 15.2 MiB/s | 78.0 KiB | 00m00s [111/177] libgomp-0:15.2.1-2.fc44.x86_6 100% | 121.4 MiB/s | 372.9 KiB | 00m00s [112/177] rpm-sign-libs-0:6.0.0-1.fc44. 100% | 13.8 MiB/s | 28.2 KiB | 00m00s [113/177] lua-libs-0:5.4.8-2.fc43.x86_6 100% | 42.9 MiB/s | 131.7 KiB | 00m00s [114/177] rpm-sequoia-0:1.9.0-2.fc43.x8 100% | 113.9 MiB/s | 933.3 KiB | 00m00s [115/177] sqlite-libs-0:3.50.4-1.fc44.x 100% | 82.6 MiB/s | 761.5 KiB | 00m00s [116/177] elfutils-default-yama-scope-0 100% | 1.5 MiB/s | 12.4 KiB | 00m00s [117/177] json-c-0:0.18-7.fc43.x86_64 100% | 11.0 MiB/s | 45.0 KiB | 00m00s [118/177] ima-evm-utils-libs-0:1.6.2-6. 100% | 9.5 MiB/s | 29.3 KiB | 00m00s [119/177] libfsverity-0:1.6-3.fc43.x86_ 100% | 6.1 MiB/s | 18.6 KiB | 00m00s [120/177] gnupg2-0:2.4.8-4.fc43.x86_64 100% | 164.4 MiB/s | 1.6 MiB | 00m00s [121/177] gpgverify-0:2.2-3.fc43.noarch 100% | 2.2 MiB/s | 11.1 KiB | 00m00s [122/177] gnupg2-dirmngr-0:2.4.8-4.fc43 100% | 67.1 MiB/s | 274.6 KiB | 00m00s [123/177] openssl-libs-1:3.5.1-3.fc44.x 100% | 169.3 MiB/s | 2.5 MiB | 00m00s [124/177] gnupg2-gpg-agent-0:2.4.8-4.fc 100% | 38.1 MiB/s | 272.9 KiB | 00m00s [125/177] gnupg2-gpgconf-0:2.4.8-4.fc43 100% | 18.7 MiB/s | 115.0 KiB | 00m00s [126/177] gnupg2-keyboxd-0:2.4.8-4.fc43 100% | 23.1 MiB/s | 94.7 KiB | 00m00s [127/177] gnupg2-verify-0:2.4.8-4.fc43. 
100% | 55.7 MiB/s | 171.2 KiB | 00m00s [128/177] libassuan-0:2.5.7-4.fc43.x86_ 100% | 21.9 MiB/s | 67.4 KiB | 00m00s [129/177] libgcrypt-0:1.11.1-2.fc43.x86 100% | 145.5 MiB/s | 595.8 KiB | 00m00s [130/177] npth-0:1.8-3.fc43.x86_64 100% | 8.4 MiB/s | 25.7 KiB | 00m00s [131/177] libgpg-error-0:1.55-2.fc43.x8 100% | 59.6 MiB/s | 244.3 KiB | 00m00s [132/177] tpm2-tss-0:4.1.3-8.fc43.x86_6 100% | 104.0 MiB/s | 425.9 KiB | 00m00s [133/177] ca-certificates-0:2025.2.80_v 100% | 158.5 MiB/s | 973.8 KiB | 00m00s [134/177] crypto-policies-0:20250714-5. 100% | 16.0 MiB/s | 98.5 KiB | 00m00s [135/177] gnutls-0:3.8.10-5.fc44.x86_64 100% | 175.4 MiB/s | 1.4 MiB | 00m00s [136/177] libksba-0:1.6.7-4.fc43.x86_64 100% | 26.1 MiB/s | 160.4 KiB | 00m00s [137/177] openldap-0:2.6.10-4.fc44.x86_ 100% | 42.2 MiB/s | 259.5 KiB | 00m00s [138/177] libusb1-0:1.0.29-4.fc44.x86_6 100% | 26.0 MiB/s | 79.9 KiB | 00m00s [139/177] libidn2-0:2.3.8-2.fc43.x86_64 100% | 56.9 MiB/s | 174.9 KiB | 00m00s [140/177] libtasn1-0:4.20.0-2.fc43.x86_ 100% | 24.2 MiB/s | 74.5 KiB | 00m00s [141/177] nettle-0:3.10.1-2.fc43.x86_64 100% | 103.6 MiB/s | 424.2 KiB | 00m00s [142/177] libunistring-0:1.1-10.fc43.x8 100% | 106.0 MiB/s | 542.9 KiB | 00m00s [143/177] p11-kit-0:0.25.8-1.fc44.x86_6 100% | 83.0 MiB/s | 510.0 KiB | 00m00s [144/177] libevent-0:2.1.12-16.fc43.x86 100% | 83.9 MiB/s | 257.8 KiB | 00m00s [145/177] cyrus-sasl-lib-0:2.1.28-33.fc 100% | 129.6 MiB/s | 796.5 KiB | 00m00s [146/177] libtool-ltdl-0:2.5.4-7.fc43.x 100% | 8.8 MiB/s | 36.2 KiB | 00m00s [147/177] libffi-0:3.5.2-1.fc44.x86_64 100% | 20.0 MiB/s | 41.1 KiB | 00m00s [148/177] gdbm-libs-1:1.23-10.fc43.x86_ 100% | 18.5 MiB/s | 56.8 KiB | 00m00s [149/177] alternatives-0:1.33-2.fc43.x8 100% | 13.2 MiB/s | 40.7 KiB | 00m00s [150/177] jansson-0:2.14-3.fc43.x86_64 100% | 14.7 MiB/s | 45.3 KiB | 00m00s [151/177] pkgconf-pkg-config-0:2.3.0-3. 100% | 3.1 MiB/s | 9.6 KiB | 00m00s [152/177] pkgconf-0:2.3.0-3.fc43.x86_64 100% | 14.5 MiB/s | 44.6 KiB | 00m00s [153/177] pkgconf-m4-0:2.3.0-3.fc43.noa 100% | 2.7 MiB/s | 13.9 KiB | 00m00s [154/177] libpkgconf-0:2.3.0-3.fc43.x86 100% | 4.6 MiB/s | 37.9 KiB | 00m00s [155/177] p11-kit-trust-0:0.25.8-1.fc44 100% | 17.1 MiB/s | 139.7 KiB | 00m00s [156/177] fedora-release-0:44-0.3.noarc 100% | 2.3 MiB/s | 13.9 KiB | 00m00s [157/177] binutils-0:2.45.50-4.fc44.x86 100% | 191.3 MiB/s | 5.9 MiB | 00m00s [158/177] systemd-standalone-sysusers-0 100% | 12.8 MiB/s | 143.8 KiB | 00m00s [159/177] xxhash-libs-0:0.8.3-3.fc43.x8 100% | 7.5 MiB/s | 38.5 KiB | 00m00s [160/177] fedora-release-identity-basic 100% | 3.6 MiB/s | 14.6 KiB | 00m00s [161/177] libcurl-0:8.16.0-1.fc44.x86_6 100% | 50.2 MiB/s | 410.9 KiB | 00m00s [162/177] krb5-libs-0:1.21.3-8.fc44.x86 100% | 82.5 MiB/s | 760.8 KiB | 00m00s [163/177] gdb-minimal-0:16.3-6.fc44.x86 100% | 142.1 MiB/s | 4.4 MiB | 00m00s [164/177] libbrotli-0:1.1.0-10.fc44.x86 100% | 30.1 MiB/s | 339.1 KiB | 00m00s [165/177] libnghttp2-0:1.67.1-1.fc44.x8 100% | 7.1 MiB/s | 73.1 KiB | 00m00s [166/177] libpsl-0:0.21.5-6.fc43.x86_64 100% | 21.1 MiB/s | 65.0 KiB | 00m00s [167/177] keyutils-libs-0:1.6.3-6.fc43. 
100% | 15.3 MiB/s | 31.4 KiB | 00m00s [168/177] libssh-0:0.11.3-1.fc44.x86_64 100% | 75.8 MiB/s | 232.8 KiB | 00m00s [169/177] libcom_err-0:1.47.3-2.fc43.x8 100% | 8.7 MiB/s | 26.8 KiB | 00m00s [170/177] libverto-0:0.3.2-11.fc43.x86_ 100% | 6.7 MiB/s | 20.7 KiB | 00m00s [171/177] publicsuffix-list-dafsa-0:202 100% | 19.3 MiB/s | 59.2 KiB | 00m00s [172/177] libssh-config-0:0.11.3-1.fc44 100% | 4.4 MiB/s | 9.1 KiB | 00m00s [173/177] policycoreutils-0:3.9-5.fc44. 100% | 69.9 MiB/s | 214.6 KiB | 00m00s [174/177] selinux-policy-0:42.11-1.fc44 100% | 19.8 MiB/s | 60.9 KiB | 00m00s [175/177] rpm-plugin-selinux-0:6.0.0-1. 100% | 6.3 MiB/s | 19.5 KiB | 00m00s [176/177] libselinux-utils-0:3.9-5.fc44 100% | 23.3 MiB/s | 119.3 KiB | 00m00s [177/177] selinux-policy-targeted-0:42. 100% | 283.0 MiB/s | 6.8 MiB | 00m00s -------------------------------------------------------------------------------- [177/177] Total 100% | 187.1 MiB/s | 66.4 MiB | 00m00s Running transaction Importing OpenPGP key 0x6D9F90A6: UserID : "Fedora (44) " Fingerprint: 36F612DCF27F7D1A48A835E4DBFCF71C6D9F90A6 From : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-44-primary The key was successfully imported. Importing OpenPGP key 0x6D9F90A6: UserID : "Fedora (44) " Fingerprint: 36F612DCF27F7D1A48A835E4DBFCF71C6D9F90A6 From : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-44-primary The key was successfully imported. Importing OpenPGP key 0x31645531: UserID : "Fedora (43) " Fingerprint: C6E7F081CF80E13146676E88829B606631645531 From : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-43-primary The key was successfully imported. Importing OpenPGP key 0xF577861E: UserID : "Fedora (45) " Fingerprint: 4F50A6114CD5C6976A7F1179655A4B02F577861E From : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-45-primary The key was successfully imported. [ 1/179] Verify package files 100% | 662.0 B/s | 177.0 B | 00m00s [ 2/179] Prepare transaction 100% | 3.1 KiB/s | 177.0 B | 00m00s [ 3/179] Installing libgcc-0:15.2.1-2. 100% | 262.0 MiB/s | 268.3 KiB | 00m00s [ 4/179] Installing libssh-config-0:0. 100% | 0.0 B/s | 816.0 B | 00m00s [ 5/179] Installing publicsuffix-list- 100% | 0.0 B/s | 69.8 KiB | 00m00s [ 6/179] Installing fedora-release-ide 100% | 0.0 B/s | 920.0 B | 00m00s [ 7/179] Installing fedora-gpg-keys-0: 100% | 43.7 MiB/s | 179.0 KiB | 00m00s [ 8/179] Installing fedora-repos-rawhi 100% | 0.0 B/s | 2.4 KiB | 00m00s [ 9/179] Installing fedora-repos-0:44- 100% | 0.0 B/s | 5.7 KiB | 00m00s [ 10/179] Installing fedora-release-com 100% | 24.3 MiB/s | 24.9 KiB | 00m00s [ 11/179] Installing fedora-release-0:4 100% | 15.1 KiB/s | 124.0 B | 00m00s >>> Running sysusers scriptlet: setup-0:2.15.0-26.fc43.noarch >>> Finished sysusers scriptlet: setup-0:2.15.0-26.fc43.noarch >>> Scriptlet output: >>> Creating group 'adm' with GID 4. >>> Creating group 'audio' with GID 63. >>> Creating group 'cdrom' with GID 11. >>> Creating group 'clock' with GID 103. >>> Creating group 'dialout' with GID 18. >>> Creating group 'disk' with GID 6. >>> Creating group 'floppy' with GID 19. >>> Creating group 'ftp' with GID 50. >>> Creating group 'games' with GID 20. >>> Creating group 'input' with GID 104. >>> Creating group 'kmem' with GID 9. >>> Creating group 'kvm' with GID 36. >>> Creating group 'lock' with GID 54. >>> Creating group 'lp' with GID 7. >>> Creating group 'mail' with GID 12. >>> Creating group 'man' with GID 15. >>> Creating group 'mem' with GID 8. 
>>> Creating group 'nobody' with GID 65534. >>> Creating group 'render' with GID 105. >>> Creating group 'root' with GID 0. >>> Creating group 'sgx' with GID 106. >>> Creating group 'sys' with GID 3. >>> Creating group 'tape' with GID 33. >>> Creating group 'tty' with GID 5. >>> Creating group 'users' with GID 100. >>> Creating group 'utmp' with GID 22. >>> Creating group 'video' with GID 39. >>> Creating group 'wheel' with GID 10. >>> Creating user 'adm' (adm) with UID 3 and GID 4. >>> Creating group 'bin' with GID 1. >>> Creating user 'bin' (bin) with UID 1 and GID 1. >>> Creating group 'daemon' with GID 2. >>> Creating user 'daemon' (daemon) with UID 2 and GID 2. >>> Creating user 'ftp' (FTP User) with UID 14 and GID 50. >>> Creating user 'games' (games) with UID 12 and GID 100. >>> Creating user 'halt' (halt) with UID 7 and GID 0. >>> Creating user 'lp' (lp) with UID 4 and GID 7. >>> Creating user 'mail' (mail) with UID 8 and GID 12. >>> Creating user 'nobody' (Kernel Overflow User) with UID 65534 and GID 65534. >>> Creating user 'operator' (operator) with UID 11 and GID 0. >>> Creating user 'root' (Super User) with UID 0 and GID 0. >>> Creating user 'shutdown' (shutdown) with UID 6 and GID 0. >>> Creating user 'sync' (sync) with UID 5 and GID 0. >>> [ 12/179] Installing setup-0:2.15.0-26. 100% | 47.6 MiB/s | 730.6 KiB | 00m00s >>> [RPM] /etc/hosts created as /etc/hosts.rpmnew [ 13/179] Installing filesystem-0:3.18- 100% | 2.7 MiB/s | 212.8 KiB | 00m00s [ 14/179] Installing pkgconf-m4-0:2.3.0 100% | 0.0 B/s | 14.8 KiB | 00m00s [ 15/179] Installing pcre2-syntax-0:10. 100% | 271.2 MiB/s | 277.8 KiB | 00m00s [ 16/179] Installing gnulib-l10n-0:2024 100% | 215.5 MiB/s | 661.9 KiB | 00m00s [ 17/179] Installing coreutils-common-0 100% | 385.3 MiB/s | 11.2 MiB | 00m00s [ 18/179] Installing ncurses-base-0:6.5 100% | 86.3 MiB/s | 353.5 KiB | 00m00s [ 19/179] Installing bash-0:5.3.0-2.fc4 100% | 263.4 MiB/s | 8.4 MiB | 00m00s [ 20/179] Installing glibc-common-0:2.4 100% | 60.0 MiB/s | 1.0 MiB | 00m00s [ 21/179] Installing glibc-gconv-extra- 100% | 270.8 MiB/s | 7.3 MiB | 00m00s [ 22/179] Installing glibc-0:2.42.9000- 100% | 176.3 MiB/s | 6.7 MiB | 00m00s [ 23/179] Installing ncurses-libs-0:6.5 100% | 232.6 MiB/s | 952.8 KiB | 00m00s [ 24/179] Installing glibc-minimal-lang 100% | 0.0 B/s | 124.0 B | 00m00s [ 25/179] Installing zlib-ng-compat-0:2 100% | 135.2 MiB/s | 138.4 KiB | 00m00s [ 26/179] Installing bzip2-libs-0:1.0.8 100% | 79.8 MiB/s | 81.7 KiB | 00m00s [ 27/179] Installing libgpg-error-0:1.5 100% | 60.0 MiB/s | 921.1 KiB | 00m00s [ 28/179] Installing libstdc++-0:15.2.1 100% | 355.5 MiB/s | 2.8 MiB | 00m00s [ 29/179] Installing libassuan-0:2.5.7- 100% | 161.7 MiB/s | 165.6 KiB | 00m00s [ 30/179] Installing libgcrypt-0:1.11.1 100% | 393.8 MiB/s | 1.6 MiB | 00m00s [ 31/179] Installing readline-0:8.3-2.f 100% | 250.9 MiB/s | 513.9 KiB | 00m00s [ 32/179] Installing gmp-1:6.3.0-4.fc44 100% | 399.2 MiB/s | 817.5 KiB | 00m00s [ 33/179] Installing xz-libs-1:5.8.1-2. 100% | 213.8 MiB/s | 218.9 KiB | 00m00s [ 34/179] Installing libuuid-0:2.41.1-1 100% | 0.0 B/s | 38.5 KiB | 00m00s [ 35/179] Installing popt-0:1.19-9.fc43 100% | 68.1 MiB/s | 139.4 KiB | 00m00s [ 36/179] Installing libzstd-0:1.5.7-3. 100% | 306.5 MiB/s | 941.6 KiB | 00m00s [ 37/179] Installing elfutils-libelf-0: 100% | 388.8 MiB/s | 1.2 MiB | 00m00s [ 38/179] Installing npth-0:1.8-3.fc43. 
100% | 0.0 B/s | 50.7 KiB | 00m00s [ 39/179] Installing libblkid-0:2.41.1- 100% | 257.3 MiB/s | 263.5 KiB | 00m00s [ 40/179] Installing libxcrypt-0:4.4.38 100% | 280.4 MiB/s | 287.1 KiB | 00m00s [ 41/179] Installing libsepol-0:3.9-2.f 100% | 401.8 MiB/s | 822.9 KiB | 00m00s [ 42/179] Installing sqlite-libs-0:3.50 100% | 379.1 MiB/s | 1.5 MiB | 00m00s [ 43/179] Installing gnupg2-gpgconf-0:2 100% | 18.9 MiB/s | 252.0 KiB | 00m00s [ 44/179] Installing libattr-0:2.5.2-6. 100% | 0.0 B/s | 25.4 KiB | 00m00s [ 45/179] Installing libacl-0:2.3.2-4.f 100% | 0.0 B/s | 36.8 KiB | 00m00s [ 46/179] Installing pcre2-0:10.46-1.fc 100% | 341.4 MiB/s | 699.1 KiB | 00m00s [ 47/179] Installing libselinux-0:3.9-5 100% | 189.8 MiB/s | 194.4 KiB | 00m00s [ 48/179] Installing grep-0:3.12-2.fc43 100% | 62.7 MiB/s | 1.0 MiB | 00m00s [ 49/179] Installing sed-0:4.9-5.fc43.x 100% | 56.3 MiB/s | 865.5 KiB | 00m00s [ 50/179] Installing findutils-1:4.10.0 100% | 109.3 MiB/s | 1.9 MiB | 00m00s [ 51/179] Installing libtasn1-0:4.20.0- 100% | 173.9 MiB/s | 178.1 KiB | 00m00s [ 52/179] Installing libunistring-0:1.1 100% | 345.3 MiB/s | 1.7 MiB | 00m00s [ 53/179] Installing libidn2-0:2.3.8-2. 100% | 54.6 MiB/s | 558.7 KiB | 00m00s [ 54/179] Installing crypto-policies-0: 100% | 33.6 MiB/s | 172.0 KiB | 00m00s [ 55/179] Installing xz-1:5.8.1-2.fc43. 100% | 70.1 MiB/s | 1.3 MiB | 00m00s [ 56/179] Installing libmount-0:2.41.1- 100% | 182.5 MiB/s | 373.8 KiB | 00m00s [ 57/179] Installing gnupg2-verify-0:2. 100% | 26.3 MiB/s | 349.9 KiB | 00m00s [ 58/179] Installing dwz-0:0.16-2.fc43. 100% | 21.7 MiB/s | 288.5 KiB | 00m00s [ 59/179] Installing mpfr-0:4.2.2-2.fc4 100% | 271.6 MiB/s | 834.4 KiB | 00m00s [ 60/179] Installing gawk-0:5.3.2-2.fc4 100% | 95.6 MiB/s | 1.8 MiB | 00m00s [ 61/179] Installing libksba-0:1.6.7-4. 100% | 195.8 MiB/s | 401.1 KiB | 00m00s [ 62/179] Installing unzip-0:6.0-68.fc4 100% | 29.6 MiB/s | 393.8 KiB | 00m00s [ 63/179] Installing file-libs-0:5.46-8 100% | 624.1 MiB/s | 11.9 MiB | 00m00s [ 64/179] Installing file-0:5.46-8.fc44 100% | 7.6 MiB/s | 101.7 KiB | 00m00s [ 65/179] Installing diffutils-0:3.12-3 100% | 91.8 MiB/s | 1.6 MiB | 00m00s [ 66/179] Installing libeconf-0:0.7.9-2 100% | 65.0 MiB/s | 66.5 KiB | 00m00s [ 67/179] Installing libcap-ng-0:0.8.5- 100% | 69.2 MiB/s | 70.8 KiB | 00m00s [ 68/179] Installing audit-libs-0:4.1.2 100% | 186.3 MiB/s | 381.5 KiB | 00m00s [ 69/179] Installing pam-libs-0:1.7.1-3 100% | 126.0 MiB/s | 129.0 KiB | 00m00s [ 70/179] Installing libcap-0:2.76-3.fc 100% | 14.9 MiB/s | 214.3 KiB | 00m00s [ 71/179] Installing systemd-libs-0:258 100% | 332.1 MiB/s | 2.3 MiB | 00m00s [ 72/179] Installing libsemanage-0:3.9- 100% | 151.5 MiB/s | 310.2 KiB | 00m00s [ 73/179] Installing libsmartcols-0:2.4 100% | 177.3 MiB/s | 181.6 KiB | 00m00s [ 74/179] Installing lua-libs-0:5.4.8-2 100% | 275.3 MiB/s | 281.9 KiB | 00m00s [ 75/179] Installing json-c-0:0.18-7.fc 100% | 82.0 MiB/s | 84.0 KiB | 00m00s [ 76/179] Installing libffi-0:3.5.2-1.f 100% | 83.2 MiB/s | 85.2 KiB | 00m00s [ 77/179] Installing p11-kit-0:0.25.8-1 100% | 109.1 MiB/s | 2.3 MiB | 00m00s [ 78/179] Installing alternatives-0:1.3 100% | 4.8 MiB/s | 63.8 KiB | 00m00s [ 79/179] Installing p11-kit-trust-0:0. 100% | 19.9 MiB/s | 448.3 KiB | 00m00s [ 80/179] Installing openssl-libs-1:3.5 100% | 368.8 MiB/s | 9.2 MiB | 00m00s [ 81/179] Installing coreutils-0:9.8-3. 
100% | 152.1 MiB/s | 5.5 MiB | 00m00s [ 82/179] Installing ca-certificates-0: 100% | 2.0 MiB/s | 2.5 MiB | 00m01s [ 83/179] Installing gzip-0:1.14-1.fc44 100% | 26.3 MiB/s | 403.3 KiB | 00m00s [ 84/179] Installing rpm-sequoia-0:1.9. 100% | 354.1 MiB/s | 2.5 MiB | 00m00s [ 85/179] Installing libfsverity-0:1.6- 100% | 28.8 MiB/s | 29.5 KiB | 00m00s [ 86/179] Installing libevent-0:2.1.12- 100% | 288.7 MiB/s | 886.8 KiB | 00m00s [ 87/179] Installing util-linux-core-0: 100% | 77.9 MiB/s | 1.5 MiB | 00m00s [ 88/179] Installing libusb1-0:1.0.29-4 100% | 18.8 MiB/s | 172.9 KiB | 00m00s >>> Running sysusers scriptlet: tpm2-tss-0:4.1.3-8.fc43.x86_64 >>> Finished sysusers scriptlet: tpm2-tss-0:4.1.3-8.fc43.x86_64 >>> Scriptlet output: >>> Creating group 'tss' with GID 59. >>> Creating user 'tss' (Account used for TPM access) with UID 59 and GID 59. >>> [ 89/179] Installing tpm2-tss-0:4.1.3-8 100% | 262.0 MiB/s | 1.6 MiB | 00m00s [ 90/179] Installing ima-evm-utils-libs 100% | 60.5 MiB/s | 62.0 KiB | 00m00s [ 91/179] Installing gnupg2-gpg-agent-0 100% | 30.0 MiB/s | 675.4 KiB | 00m00s [ 92/179] Installing systemd-standalone 100% | 20.5 MiB/s | 294.1 KiB | 00m00s [ 93/179] Installing rpm-libs-0:6.0.0-1 100% | 304.5 MiB/s | 935.3 KiB | 00m00s [ 94/179] Installing zip-0:3.0-44.fc43. 100% | 45.5 MiB/s | 698.4 KiB | 00m00s [ 95/179] Installing gnupg2-keyboxd-0:2 100% | 28.3 MiB/s | 202.7 KiB | 00m00s [ 96/179] Installing libpsl-0:0.21.5-6. 100% | 75.7 MiB/s | 77.5 KiB | 00m00s [ 97/179] Installing tar-2:1.35-6.fc43. 100% | 134.5 MiB/s | 3.0 MiB | 00m00s [ 98/179] Installing linkdupes-0:0.7.2- 100% | 54.7 MiB/s | 840.1 KiB | 00m00s [ 99/179] Installing libselinux-utils-0 100% | 21.1 MiB/s | 323.4 KiB | 00m00s [100/179] Installing liblastlog2-0:2.41 100% | 5.0 MiB/s | 35.9 KiB | 00m00s [101/179] Installing libfdisk-0:2.41.1- 100% | 124.2 MiB/s | 381.4 KiB | 00m00s [102/179] Installing util-linux-0:2.41. 100% | 94.0 MiB/s | 3.6 MiB | 00m00s [103/179] Installing policycoreutils-0: 100% | 25.7 MiB/s | 711.8 KiB | 00m00s [104/179] Installing selinux-policy-0:4 100% | 1.5 MiB/s | 33.3 KiB | 00m00s [105/179] Installing selinux-policy-tar 100% | 181.8 MiB/s | 14.9 MiB | 00m00s [106/179] Installing zstd-0:1.5.7-3.fc4 100% | 29.3 MiB/s | 509.8 KiB | 00m00s [107/179] Installing libxml2-0:2.12.10- 100% | 94.7 MiB/s | 1.7 MiB | 00m00s [108/179] Installing nettle-0:3.10.1-2. 100% | 258.4 MiB/s | 793.7 KiB | 00m00s [109/179] Installing gnutls-0:3.8.10-5. 
100% | 349.4 MiB/s | 3.8 MiB | 00m00s [110/179] Installing bzip2-0:1.0.8-21.f 100% | 7.0 MiB/s | 99.8 KiB | 00m00s [111/179] Installing add-determinism-0: 100% | 121.3 MiB/s | 2.3 MiB | 00m00s [112/179] Installing build-reproducibil 100% | 1.5 MiB/s | 1.5 KiB | 00m00s [113/179] Installing cpio-0:2.15-6.fc43 100% | 64.7 MiB/s | 1.1 MiB | 00m00s [114/179] Installing ed-0:1.22.2-1.fc44 100% | 11.3 MiB/s | 150.4 KiB | 00m00s [115/179] Installing patch-0:2.8-2.fc43 100% | 16.9 MiB/s | 224.3 KiB | 00m00s [116/179] Installing lz4-libs-0:1.10.0- 100% | 158.6 MiB/s | 162.5 KiB | 00m00s [117/179] Installing libarchive-0:3.8.1 100% | 310.2 MiB/s | 953.1 KiB | 00m00s [118/179] Installing libgomp-0:15.2.1-2 100% | 264.9 MiB/s | 542.5 KiB | 00m00s [119/179] Installing libtool-ltdl-0:2.5 100% | 69.6 MiB/s | 71.2 KiB | 00m00s [120/179] Installing gdbm-libs-1:1.23-1 100% | 128.5 MiB/s | 131.6 KiB | 00m00s [121/179] Installing cyrus-sasl-lib-0:2 100% | 121.0 MiB/s | 2.3 MiB | 00m00s [122/179] Installing openldap-0:2.6.10- 100% | 216.0 MiB/s | 663.6 KiB | 00m00s [123/179] Installing gnupg2-dirmngr-0:2 100% | 27.6 MiB/s | 621.1 KiB | 00m00s [124/179] Installing gnupg2-0:2.4.8-4.f 100% | 211.3 MiB/s | 6.6 MiB | 00m00s [125/179] Installing rpm-sign-libs-0:6. 100% | 39.6 MiB/s | 40.6 KiB | 00m00s [126/179] Installing gpgverify-0:2.2-3. 100% | 0.0 B/s | 9.4 KiB | 00m00s [127/179] Installing jansson-0:2.14-3.f 100% | 88.3 MiB/s | 90.5 KiB | 00m00s [128/179] Installing libpkgconf-0:2.3.0 100% | 77.4 MiB/s | 79.2 KiB | 00m00s [129/179] Installing pkgconf-0:2.3.0-3. 100% | 6.8 MiB/s | 91.0 KiB | 00m00s [130/179] Installing pkgconf-pkg-config 100% | 147.8 KiB/s | 1.8 KiB | 00m00s [131/179] Installing xxhash-libs-0:0.8. 100% | 89.4 MiB/s | 91.6 KiB | 00m00s [132/179] Installing libbrotli-0:1.1.0- 100% | 272.0 MiB/s | 835.6 KiB | 00m00s [133/179] Installing libnghttp2-0:1.67. 100% | 159.5 MiB/s | 163.4 KiB | 00m00s [134/179] Installing keyutils-libs-0:1. 100% | 54.4 MiB/s | 55.7 KiB | 00m00s [135/179] Installing libcom_err-0:1.47. 100% | 62.7 MiB/s | 64.2 KiB | 00m00s [136/179] Installing libverto-0:0.3.2-1 100% | 26.6 MiB/s | 27.2 KiB | 00m00s [137/179] Installing krb5-libs-0:1.21.3 100% | 287.5 MiB/s | 2.3 MiB | 00m00s [138/179] Installing libssh-0:0.11.3-1. 100% | 277.9 MiB/s | 569.2 KiB | 00m00s [139/179] Installing libcurl-0:8.16.0-1 100% | 299.7 MiB/s | 920.6 KiB | 00m00s [140/179] Installing curl-0:8.16.0-1.fc 100% | 19.5 MiB/s | 478.1 KiB | 00m00s [141/179] Installing rpm-0:6.0.0-1.fc44 100% | 71.5 MiB/s | 2.6 MiB | 00m00s [142/179] Installing efi-srpm-macros-0: 100% | 40.2 MiB/s | 41.1 KiB | 00m00s [143/179] Installing java-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s [144/179] Installing lua-srpm-macros-0: 100% | 0.0 B/s | 1.9 KiB | 00m00s [145/179] Installing tree-sitter-srpm-m 100% | 0.0 B/s | 9.3 KiB | 00m00s [146/179] Installing zig-srpm-macros-0: 100% | 0.0 B/s | 1.7 KiB | 00m00s [147/179] Installing filesystem-srpm-ma 100% | 0.0 B/s | 38.9 KiB | 00m00s [148/179] Installing elfutils-default-y 100% | 408.6 KiB/s | 2.0 KiB | 00m00s [149/179] Installing elfutils-libs-0:0. 100% | 223.1 MiB/s | 685.2 KiB | 00m00s [150/179] Installing elfutils-debuginfo 100% | 6.0 MiB/s | 86.2 KiB | 00m00s [151/179] Installing elfutils-0:0.193-3 100% | 139.0 MiB/s | 2.9 MiB | 00m00s [152/179] Installing binutils-0:2.45.50 100% | 311.1 MiB/s | 27.4 MiB | 00m00s [153/179] Installing gdb-minimal-0:16.3 100% | 276.2 MiB/s | 13.3 MiB | 00m00s [154/179] Installing debugedit-0:5.2-3. 
100% | 15.2 MiB/s | 217.3 KiB | 00m00s [155/179] Installing rpm-build-libs-0:6 100% | 262.9 MiB/s | 269.2 KiB | 00m00s [156/179] Installing rust-srpm-macros-0 100% | 0.0 B/s | 5.6 KiB | 00m00s [157/179] Installing qt6-srpm-macros-0: 100% | 0.0 B/s | 740.0 B | 00m00s [158/179] Installing qt5-srpm-macros-0: 100% | 0.0 B/s | 776.0 B | 00m00s [159/179] Installing perl-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s [160/179] Installing package-notes-srpm 100% | 0.0 B/s | 2.0 KiB | 00m00s [161/179] Installing openblas-srpm-macr 100% | 0.0 B/s | 392.0 B | 00m00s [162/179] Installing ocaml-srpm-macros- 100% | 0.0 B/s | 2.1 KiB | 00m00s [163/179] Installing kernel-srpm-macros 100% | 0.0 B/s | 2.3 KiB | 00m00s [164/179] Installing gnat-srpm-macros-0 100% | 0.0 B/s | 1.3 KiB | 00m00s [165/179] Installing ghc-srpm-macros-0: 100% | 0.0 B/s | 1.0 KiB | 00m00s [166/179] Installing gap-srpm-macros-0: 100% | 0.0 B/s | 2.7 KiB | 00m00s [167/179] Installing fpc-srpm-macros-0: 100% | 0.0 B/s | 420.0 B | 00m00s [168/179] Installing ansible-srpm-macro 100% | 0.0 B/s | 36.2 KiB | 00m00s [169/179] Installing redhat-rpm-config- 100% | 92.5 MiB/s | 189.5 KiB | 00m00s [170/179] Installing forge-srpm-macros- 100% | 0.0 B/s | 40.3 KiB | 00m00s [171/179] Installing fonts-srpm-macros- 100% | 55.7 MiB/s | 57.0 KiB | 00m00s [172/179] Installing go-srpm-macros-0:3 100% | 61.6 MiB/s | 63.0 KiB | 00m00s [173/179] Installing rpm-build-0:6.0.0- 100% | 19.3 MiB/s | 296.5 KiB | 00m00s [174/179] Installing pyproject-srpm-mac 100% | 498.4 KiB/s | 2.5 KiB | 00m00s [175/179] Installing python-srpm-macros 100% | 51.7 MiB/s | 52.9 KiB | 00m00s [176/179] Installing rpm-plugin-selinux 100% | 0.0 B/s | 13.0 KiB | 00m00s [177/179] Installing which-0:2.23-3.fc4 100% | 6.0 MiB/s | 85.7 KiB | 00m00s [178/179] Installing shadow-utils-2:4.1 100% | 136.9 MiB/s | 4.0 MiB | 00m00s [179/179] Installing info-0:7.2-6.fc43. 100% | 43.8 KiB/s | 354.3 KiB | 00m08s Complete! 
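The minimal buildroot installed above (the 177-package "Buildsystem building group" transaction) can be recreated outside Copr for debugging; a rough local sketch, assuming mock is installed and using the stock fedora-rawhide-x86_64 config rather than the generated child.cfg used in this build, would be:

    mock -r fedora-rawhide-x86_64 --init     # populate a comparable minimal buildroot
    mock -r fedora-rawhide-x86_64 --shell    # open an interactive shell inside the chroot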
Finish: installing minimal buildroot with dnf5 Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: INFO: add-determinism-0.7.2-2.fc44.x86_64 alternatives-1.33-2.fc43.x86_64 ansible-srpm-macros-1-18.1.fc43.noarch audit-libs-4.1.2-2.fc44.x86_64 bash-5.3.0-2.fc43.x86_64 binutils-2.45.50-4.fc44.x86_64 build-reproducibility-srpm-macros-0.7.2-2.fc44.noarch bzip2-1.0.8-21.fc43.x86_64 bzip2-libs-1.0.8-21.fc43.x86_64 ca-certificates-2025.2.80_v9.0.304-2.fc44.noarch coreutils-9.8-3.fc44.x86_64 coreutils-common-9.8-3.fc44.x86_64 cpio-2.15-6.fc43.x86_64 crypto-policies-20250714-5.gitcd6043a.fc44.noarch curl-8.16.0-1.fc44.x86_64 cyrus-sasl-lib-2.1.28-33.fc44.x86_64 debugedit-5.2-3.fc44.x86_64 diffutils-3.12-3.fc43.x86_64 dwz-0.16-2.fc43.x86_64 ed-1.22.2-1.fc44.x86_64 efi-srpm-macros-6-4.fc43.noarch elfutils-0.193-3.fc43.x86_64 elfutils-debuginfod-client-0.193-3.fc43.x86_64 elfutils-default-yama-scope-0.193-3.fc43.noarch elfutils-libelf-0.193-3.fc43.x86_64 elfutils-libs-0.193-3.fc43.x86_64 fedora-gpg-keys-44-0.1.noarch fedora-release-44-0.3.noarch fedora-release-common-44-0.3.noarch fedora-release-identity-basic-44-0.3.noarch fedora-repos-44-0.1.noarch fedora-repos-rawhide-44-0.1.noarch file-5.46-8.fc44.x86_64 file-libs-5.46-8.fc44.x86_64 filesystem-3.18-50.fc43.x86_64 filesystem-srpm-macros-3.18-50.fc43.noarch findutils-4.10.0-6.fc43.x86_64 fonts-srpm-macros-5.0.0-1.fc44.noarch forge-srpm-macros-0.4.0-3.fc43.noarch fpc-srpm-macros-1.3-15.fc43.noarch gap-srpm-macros-2-1.fc44.noarch gawk-5.3.2-2.fc43.x86_64 gdb-minimal-16.3-6.fc44.x86_64 gdbm-libs-1.23-10.fc43.x86_64 ghc-srpm-macros-1.9.2-3.fc43.noarch glibc-2.42.9000-5.fc44.x86_64 glibc-common-2.42.9000-5.fc44.x86_64 glibc-gconv-extra-2.42.9000-5.fc44.x86_64 glibc-minimal-langpack-2.42.9000-5.fc44.x86_64 gmp-6.3.0-4.fc44.x86_64 gnat-srpm-macros-6-8.fc43.noarch gnulib-l10n-20241231-1.fc44.noarch gnupg2-2.4.8-4.fc43.x86_64 gnupg2-dirmngr-2.4.8-4.fc43.x86_64 gnupg2-gpg-agent-2.4.8-4.fc43.x86_64 gnupg2-gpgconf-2.4.8-4.fc43.x86_64 gnupg2-keyboxd-2.4.8-4.fc43.x86_64 gnupg2-verify-2.4.8-4.fc43.x86_64 gnutls-3.8.10-5.fc44.x86_64 go-srpm-macros-3.8.0-1.fc44.noarch gpg-pubkey-36f612dcf27f7d1a48a835e4dbfcf71c6d9f90a6-6786af3b gpg-pubkey-4f50a6114cd5c6976a7f1179655a4b02f577861e-6888bc98 gpg-pubkey-c6e7f081cf80e13146676e88829b606631645531-66b6dccf gpgverify-2.2-3.fc43.noarch grep-3.12-2.fc43.x86_64 gzip-1.14-1.fc44.x86_64 ima-evm-utils-libs-1.6.2-6.fc43.x86_64 info-7.2-6.fc43.x86_64 jansson-2.14-3.fc43.x86_64 java-srpm-macros-1-7.fc43.noarch json-c-0.18-7.fc43.x86_64 kernel-srpm-macros-1.0-27.fc43.noarch keyutils-libs-1.6.3-6.fc43.x86_64 krb5-libs-1.21.3-8.fc44.x86_64 libacl-2.3.2-4.fc43.x86_64 libarchive-3.8.1-3.fc43.x86_64 libassuan-2.5.7-4.fc43.x86_64 libattr-2.5.2-6.fc43.x86_64 libblkid-2.41.1-17.fc44.x86_64 libbrotli-1.1.0-10.fc44.x86_64 libcap-2.76-3.fc44.x86_64 libcap-ng-0.8.5-8.fc44.x86_64 libcom_err-1.47.3-2.fc43.x86_64 libcurl-8.16.0-1.fc44.x86_64 libeconf-0.7.9-2.fc43.x86_64 libevent-2.1.12-16.fc43.x86_64 libfdisk-2.41.1-17.fc44.x86_64 libffi-3.5.2-1.fc44.x86_64 libfsverity-1.6-3.fc43.x86_64 libgcc-15.2.1-2.fc44.x86_64 libgcrypt-1.11.1-2.fc43.x86_64 libgomp-15.2.1-2.fc44.x86_64 libgpg-error-1.55-2.fc43.x86_64 libidn2-2.3.8-2.fc43.x86_64 libksba-1.6.7-4.fc43.x86_64 liblastlog2-2.41.1-17.fc44.x86_64 libmount-2.41.1-17.fc44.x86_64 libnghttp2-1.67.1-1.fc44.x86_64 libpkgconf-2.3.0-3.fc43.x86_64 libpsl-0.21.5-6.fc43.x86_64 libselinux-3.9-5.fc44.x86_64 libselinux-utils-3.9-5.fc44.x86_64 
libsemanage-3.9-4.fc44.x86_64 libsepol-3.9-2.fc43.x86_64 libsmartcols-2.41.1-17.fc44.x86_64 libssh-0.11.3-1.fc44.x86_64 libssh-config-0.11.3-1.fc44.noarch libstdc++-15.2.1-2.fc44.x86_64 libtasn1-4.20.0-2.fc43.x86_64 libtool-ltdl-2.5.4-7.fc43.x86_64 libunistring-1.1-10.fc43.x86_64 libusb1-1.0.29-4.fc44.x86_64 libuuid-2.41.1-17.fc44.x86_64 libverto-0.3.2-11.fc43.x86_64 libxcrypt-4.4.38-9.fc44.x86_64 libxml2-2.12.10-5.fc44.x86_64 libzstd-1.5.7-3.fc44.x86_64 linkdupes-0.7.2-2.fc44.x86_64 lua-libs-5.4.8-2.fc43.x86_64 lua-srpm-macros-1-16.fc43.noarch lz4-libs-1.10.0-3.fc43.x86_64 mpfr-4.2.2-2.fc43.x86_64 ncurses-base-6.5-7.20250614.fc43.noarch ncurses-libs-6.5-7.20250614.fc43.x86_64 nettle-3.10.1-2.fc43.x86_64 npth-1.8-3.fc43.x86_64 ocaml-srpm-macros-11-2.fc43.noarch openblas-srpm-macros-2-20.fc43.noarch openldap-2.6.10-4.fc44.x86_64 openssl-libs-3.5.1-3.fc44.x86_64 p11-kit-0.25.8-1.fc44.x86_64 p11-kit-trust-0.25.8-1.fc44.x86_64 package-notes-srpm-macros-0.5-14.fc43.noarch pam-libs-1.7.1-3.fc43.x86_64 patch-2.8-2.fc43.x86_64 pcre2-10.46-1.fc44.x86_64 pcre2-syntax-10.46-1.fc44.noarch perl-srpm-macros-1-60.fc43.noarch pkgconf-2.3.0-3.fc43.x86_64 pkgconf-m4-2.3.0-3.fc43.noarch pkgconf-pkg-config-2.3.0-3.fc43.x86_64 policycoreutils-3.9-5.fc44.x86_64 popt-1.19-9.fc43.x86_64 publicsuffix-list-dafsa-20250616-2.fc43.noarch pyproject-srpm-macros-1.18.4-1.fc44.noarch python-srpm-macros-3.14-8.fc44.noarch qt5-srpm-macros-5.15.17-2.fc43.noarch qt6-srpm-macros-6.9.2-1.fc44.noarch readline-8.3-2.fc43.x86_64 redhat-rpm-config-343-14.fc44.noarch rpm-6.0.0-1.fc44.x86_64 rpm-build-6.0.0-1.fc44.x86_64 rpm-build-libs-6.0.0-1.fc44.x86_64 rpm-libs-6.0.0-1.fc44.x86_64 rpm-plugin-selinux-6.0.0-1.fc44.x86_64 rpm-sequoia-1.9.0-2.fc43.x86_64 rpm-sign-libs-6.0.0-1.fc44.x86_64 rust-srpm-macros-26.4-1.fc44.noarch sed-4.9-5.fc43.x86_64 selinux-policy-42.11-1.fc44.noarch selinux-policy-targeted-42.11-1.fc44.noarch setup-2.15.0-26.fc43.noarch shadow-utils-4.18.0-3.fc43.x86_64 sqlite-libs-3.50.4-1.fc44.x86_64 systemd-libs-258-1.fc44.x86_64 systemd-standalone-sysusers-258-1.fc44.x86_64 tar-1.35-6.fc43.x86_64 tpm2-tss-4.1.3-8.fc43.x86_64 tree-sitter-srpm-macros-0.4.2-1.fc43.noarch unzip-6.0-68.fc44.x86_64 util-linux-2.41.1-17.fc44.x86_64 util-linux-core-2.41.1-17.fc44.x86_64 which-2.23-3.fc43.x86_64 xxhash-libs-0.8.3-3.fc43.x86_64 xz-5.8.1-2.fc43.x86_64 xz-libs-5.8.1-2.fc43.x86_64 zig-srpm-macros-1-5.fc43.noarch zip-3.0-44.fc43.x86_64 zlib-ng-compat-2.2.5-2.fc44.x86_64 zstd-1.5.7-3.fc44.x86_64 Start: buildsrpm Start: rpmbuild -bs Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc44.src.rpm Finish: rpmbuild -bs INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan INFO: /var/lib/mock/fedora-rawhide-x86_64-1759552642.238252/root/var/log/dnf5.log INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz /bin/tar: Removing leading `/' from member names Finish: buildsrpm INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-dldcc_aw/ollama/ollama.spec) Config(child) 0 minutes 26 seconds INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results INFO: Cleaning up build root ('cleanup_on_success=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot INFO: Start(/var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc44.src.rpm) Config(fedora-rawhide-x86_64) Start(bootstrap): chroot init INFO: mounting tmpfs at /var/lib/mock/fedora-rawhide-x86_64-bootstrap-1759552642.238252/root. 
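The SRPM written above lands in /var/lib/copr-rpmbuild/results before the binary build below starts; a quick sanity check of it with standard rpm query options (illustrative only, not part of the Copr pipeline) looks like:

    rpm -qpi /var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc44.src.rpm    # header: name, version, release
    rpm -qpl /var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc44.src.rpm    # payload: spec file and source tarballs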
INFO: reusing tmpfs at /var/lib/mock/fedora-rawhide-x86_64-bootstrap-1759552642.238252/root. INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start(bootstrap): cleaning package manager metadata Finish(bootstrap): cleaning package manager metadata Finish(bootstrap): chroot init Start: chroot init INFO: mounting tmpfs at /var/lib/mock/fedora-rawhide-x86_64-1759552642.238252/root. INFO: calling preinit hooks INFO: enabled root cache Start: unpacking root cache Finish: unpacking root cache INFO: enabled package manager cache Start: cleaning package manager metadata Finish: cleaning package manager metadata INFO: enabled HW Info plugin INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-6.0.0-1.fc44.x86_64 rpm-sequoia-1.9.0-2.fc43.x86_64 dnf5-5.2.17.0-2.fc44.x86_64 dnf5-plugins-5.2.17.0-2.fc44.x86_64 Finish: chroot init Start: build phase for ollama-0.12.3-1.fc44.src.rpm Start: build setup for ollama-0.12.3-1.fc44.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc44.src.rpm Updating and loading repositories: Additional repo https_developer_downlo 100% | 20.1 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 20.1 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 7.7 KiB/s | 1.5 KiB | 00m00s fedora 100% | 93.9 KiB/s | 29.1 KiB | 00m00s Repositories loaded. Package Arch Version Repository Size Installing: cmake x86_64 3.31.6-4.fc43 fedora 34.5 MiB gcc-c++ x86_64 15.2.1-2.fc44 fedora 41.4 MiB go-rpm-macros x86_64 3.8.0-1.fc44 fedora 96.6 KiB go-vendor-tools noarch 0.9.0-1.fc44 fedora 371.3 KiB hipblas-devel x86_64 7.0.1-1.fc44 fedora 2.4 MiB rocblas-devel x86_64 7.0.1-1.fc44 fedora 2.7 MiB rocm-comgr-devel x86_64 20-2.rocm7.0.0.fc44 fedora 100.5 KiB rocm-hip-devel x86_64 7.0.1-1.fc44 fedora 3.0 MiB rocm-runtime-devel x86_64 7.0.1-1.fc44 fedora 678.5 KiB rocminfo x86_64 7.0.1-1.fc44 fedora 77.6 KiB systemd-rpm-macros noarch 258-1.fc44 fedora 8.5 KiB Installing dependencies: annobin-docs noarch 12.99-1.fc43 fedora 98.9 KiB annobin-plugin-gcc x86_64 12.99-1.fc43 fedora 1.0 MiB cmake-data noarch 3.31.6-4.fc43 fedora 8.5 MiB cmake-filesystem x86_64 3.31.6-4.fc43 fedora 0.0 B cmake-rpm-macros noarch 3.31.6-4.fc43 fedora 7.7 KiB cpp x86_64 15.2.1-2.fc44 fedora 37.9 MiB emacs-filesystem noarch 1:30.0-5.fc43 fedora 0.0 B expat x86_64 2.7.2-1.fc44 fedora 298.6 KiB gcc x86_64 15.2.1-2.fc44 fedora 111.9 MiB gcc-plugin-annobin x86_64 15.2.1-2.fc44 fedora 57.1 KiB git x86_64 2.51.0-2.fc44 fedora 56.4 KiB git-core x86_64 2.51.0-2.fc44 fedora 23.6 MiB git-core-doc noarch 2.51.0-2.fc44 fedora 17.7 MiB glibc-devel x86_64 2.42.9000-5.fc44 fedora 2.3 MiB go-filesystem x86_64 3.8.0-1.fc44 fedora 0.0 B golang x86_64 1.25.1-1.fc44 fedora 9.6 MiB golang-bin x86_64 1.25.1-1.fc44 fedora 67.2 MiB golang-src noarch 1.25.1-1.fc44 fedora 81.4 MiB golist x86_64 0.10.4-8.fc44 fedora 4.5 MiB groff-base x86_64 1.23.0-11.fc44 fedora 3.8 MiB hipblas x86_64 7.0.1-1.fc44 fedora 799.6 KiB hipblas-common-devel noarch 7.0.1-1.fc44 fedora 16.8 KiB hipcc x86_64 20-2.rocm7.0.0.fc44 fedora 634.5 KiB hwdata noarch 0.399-1.fc44 fedora 9.6 MiB jsoncpp x86_64 1.9.6-2.fc43 fedora 257.6 KiB kernel-headers x86_64 6.17.0-63.fc44 fedora 6.7 MiB kmod x86_64 34.2-3.fc44 fedora 247.2 KiB less x86_64 679-4.fc44 fedora 407.0 KiB libcbor x86_64 0.12.0-6.fc43 fedora 77.8 KiB libdrm x86_64 2.4.125-3.fc44 fedora 395.8 KiB libedit x86_64 3.1-56.20250104cvs.fc43 
fedora 240.1 KiB libfido2 x86_64 1.16.0-3.fc43 fedora 238.5 KiB libmpc x86_64 1.3.1-8.fc43 fedora 160.6 KiB libpciaccess x86_64 0.16-16.fc43 fedora 44.5 KiB libstdc++-devel x86_64 15.2.1-2.fc44 fedora 37.3 MiB libuv x86_64 1:1.51.0-2.fc43 fedora 570.2 KiB libxcrypt-devel x86_64 4.4.38-9.fc44 fedora 30.8 KiB make x86_64 1:4.4.1-11.fc43 fedora 1.8 MiB mpdecimal x86_64 4.0.1-2.fc43 fedora 217.2 KiB ncurses x86_64 6.5-7.20250614.fc43 fedora 609.8 KiB numactl-libs x86_64 2.0.19-3.fc43 fedora 56.9 KiB openssh x86_64 10.0p1-7.fc44 fedora 1.4 MiB openssh-clients x86_64 10.0p1-7.fc44 fedora 2.6 MiB perl-AutoLoader noarch 5.74-520.fc43 fedora 20.6 KiB perl-B x86_64 1.89-520.fc43 fedora 501.3 KiB perl-Carp noarch 1.54-520.fc43 fedora 46.6 KiB perl-Class-Struct noarch 0.68-520.fc43 fedora 25.4 KiB perl-Data-Dumper x86_64 2.191-521.fc43 fedora 115.6 KiB perl-Digest noarch 1.20-520.fc43 fedora 35.3 KiB perl-Digest-MD5 x86_64 2.59-520.fc43 fedora 59.7 KiB perl-DynaLoader x86_64 1.57-520.fc43 fedora 32.1 KiB perl-Encode x86_64 4:3.21-520.fc43 fedora 4.7 MiB perl-Errno x86_64 1.38-520.fc43 fedora 8.4 KiB perl-Error noarch 1:0.17030-2.fc43 fedora 76.7 KiB perl-Exporter noarch 5.79-520.fc43 fedora 54.3 KiB perl-Fcntl x86_64 1.20-520.fc43 fedora 48.8 KiB perl-File-Basename noarch 2.86-520.fc43 fedora 14.0 KiB perl-File-Copy noarch 2.41-520.fc43 fedora 19.7 KiB perl-File-Path noarch 2.18-520.fc43 fedora 63.5 KiB perl-File-Temp noarch 1:0.231.200-1.fc44 fedora 163.7 KiB perl-File-Which noarch 1.27-14.fc43 fedora 30.4 KiB perl-File-stat noarch 1.14-520.fc43 fedora 12.5 KiB perl-FileHandle noarch 2.05-520.fc43 fedora 9.4 KiB perl-Getopt-Long noarch 1:2.58-520.fc43 fedora 144.5 KiB perl-Getopt-Std noarch 1.14-520.fc43 fedora 11.2 KiB perl-Git noarch 2.51.0-2.fc44 fedora 64.4 KiB perl-HTTP-Tiny noarch 0.090-521.fc43 fedora 154.4 KiB perl-IO x86_64 1.55-520.fc43 fedora 147.4 KiB perl-IO-Socket-IP noarch 0.43-521.fc43 fedora 100.3 KiB perl-IO-Socket-SSL noarch 2.095-2.fc43 fedora 714.5 KiB perl-IPC-Open3 noarch 1.24-520.fc43 fedora 27.7 KiB perl-MIME-Base32 noarch 1.303-24.fc43 fedora 30.7 KiB perl-MIME-Base64 x86_64 3.16-520.fc43 fedora 42.0 KiB perl-Net-SSLeay x86_64 1.94-11.fc43 fedora 1.3 MiB perl-POSIX x86_64 2.23-520.fc43 fedora 231.4 KiB perl-PathTools x86_64 3.94-520.fc43 fedora 180.0 KiB perl-Pod-Escapes noarch 1:1.07-520.fc43 fedora 24.9 KiB perl-Pod-Perldoc noarch 3.28.01-521.fc43 fedora 163.7 KiB perl-Pod-Simple noarch 1:3.47-3.fc43 fedora 565.3 KiB perl-Pod-Usage noarch 4:2.05-520.fc43 fedora 86.3 KiB perl-Scalar-List-Utils x86_64 5:1.70-1.fc43 fedora 144.9 KiB perl-SelectSaver noarch 1.02-520.fc43 fedora 2.2 KiB perl-Socket x86_64 4:2.040-2.fc43 fedora 120.3 KiB perl-Storable x86_64 1:3.37-521.fc43 fedora 231.2 KiB perl-Symbol noarch 1.09-520.fc43 fedora 6.8 KiB perl-Term-ANSIColor noarch 5.01-521.fc43 fedora 97.5 KiB perl-Term-Cap noarch 1.18-520.fc43 fedora 29.3 KiB perl-TermReadKey x86_64 2.38-26.fc43 fedora 64.0 KiB perl-Text-ParseWords noarch 3.31-520.fc43 fedora 13.6 KiB perl-Text-Tabs+Wrap noarch 2024.001-520.fc43 fedora 22.6 KiB perl-Time-Local noarch 2:1.350-520.fc43 fedora 69.0 KiB perl-URI noarch 5.34-1.fc44 fedora 268.0 KiB perl-base noarch 2.27-520.fc43 fedora 12.6 KiB perl-constant noarch 1.33-521.fc43 fedora 26.2 KiB perl-if noarch 0.61.000-520.fc43 fedora 5.8 KiB perl-interpreter x86_64 4:5.42.0-520.fc43 fedora 118.6 KiB perl-lib x86_64 0.65-520.fc43 fedora 8.5 KiB perl-libnet noarch 3.15-521.fc43 fedora 289.4 KiB perl-libs x86_64 4:5.42.0-520.fc43 fedora 11.5 MiB perl-locale noarch 
1.13-520.fc43 fedora 6.1 KiB perl-mro x86_64 1.29-520.fc43 fedora 41.6 KiB perl-overload noarch 1.40-520.fc43 fedora 71.6 KiB perl-overloading noarch 0.02-520.fc43 fedora 4.9 KiB perl-parent noarch 1:0.244-520.fc43 fedora 10.3 KiB perl-podlators noarch 1:6.0.2-520.fc43 fedora 317.5 KiB perl-vars noarch 1.05-520.fc43 fedora 3.9 KiB python-pip-wheel noarch 25.2-4.fc44 fedora 1.2 MiB python3 x86_64 3.14.0~rc3-1.fc44 fedora 28.9 KiB python3-boolean.py noarch 5.0-9.fc44 fedora 635.8 KiB python3-libs x86_64 3.14.0~rc3-1.fc44 fedora 43.0 MiB python3-license-expression noarch 30.4.4-3.fc44 fedora 1.2 MiB rhash x86_64 1.4.5-3.fc43 fedora 351.1 KiB rocblas x86_64 7.0.1-1.fc44 fedora 946.2 MiB rocm-clang x86_64 20-2.rocm7.0.0.fc44 fedora 68.5 MiB rocm-clang-devel x86_64 20-2.rocm7.0.0.fc44 fedora 26.1 MiB rocm-clang-libs x86_64 20-2.rocm7.0.0.fc44 fedora 94.1 MiB rocm-clang-runtime-devel x86_64 20-2.rocm7.0.0.fc44 fedora 8.4 MiB rocm-comgr x86_64 20-2.rocm7.0.0.fc44 fedora 126.3 MiB rocm-device-libs x86_64 20-2.rocm7.0.0.fc44 fedora 3.2 MiB rocm-hip x86_64 7.0.1-1.fc44 fedora 26.7 MiB rocm-libc++ x86_64 20-2.rocm7.0.0.fc44 fedora 1.3 MiB rocm-libc++-devel x86_64 20-2.rocm7.0.0.fc44 fedora 15.0 MiB rocm-lld x86_64 20-2.rocm7.0.0.fc44 fedora 5.9 MiB rocm-llvm x86_64 20-2.rocm7.0.0.fc44 fedora 52.5 MiB rocm-llvm-devel x86_64 20-2.rocm7.0.0.fc44 fedora 28.3 MiB rocm-llvm-filesystem x86_64 20-2.rocm7.0.0.fc44 fedora 0.0 B rocm-llvm-libs x86_64 20-2.rocm7.0.0.fc44 fedora 91.6 MiB rocm-llvm-static x86_64 20-2.rocm7.0.0.fc44 fedora 1.9 GiB rocm-runtime x86_64 7.0.1-1.fc44 fedora 3.2 MiB rocsolver x86_64 7.0.1-1.fc44 fedora 883.3 MiB tzdata noarch 2025b-3.fc43 fedora 1.6 MiB vim-filesystem noarch 2:9.1.1775-2.fc44 fedora 40.0 B zlib-ng-compat-devel x86_64 2.2.5-2.fc44 fedora 107.0 KiB Transaction Summary: Installing: 144 packages Total size of inbound packages is 2 GiB. Need to download 2 GiB. After this operation, 5 GiB extra will be used (install 5 GiB, remove 0 B). [ 1/144] go-rpm-macros-0:3.8.0-1.fc44. 100% | 2.7 MiB/s | 38.4 KiB | 00m00s [ 2/144] go-vendor-tools-0:0.9.0-1.fc4 100% | 9.3 MiB/s | 142.7 KiB | 00m00s [ 3/144] hipblas-devel-0:7.0.1-1.fc44. 100% | 5.4 MiB/s | 83.6 KiB | 00m00s [ 4/144] rocm-comgr-devel-0:20-2.rocm7 100% | 10.5 MiB/s | 32.4 KiB | 00m00s [ 5/144] rocblas-devel-0:7.0.1-1.fc44. 100% | 19.9 MiB/s | 101.8 KiB | 00m00s [ 6/144] rocm-hip-devel-0:7.0.1-1.fc44 100% | 38.2 MiB/s | 273.8 KiB | 00m00s [ 7/144] rocminfo-0:7.0.1-1.fc44.x86_6 100% | 12.4 MiB/s | 37.9 KiB | 00m00s [ 8/144] rocm-runtime-devel-0:7.0.1-1. 100% | 23.5 MiB/s | 120.2 KiB | 00m00s [ 9/144] systemd-rpm-macros-0:258-1.fc 100% | 1.8 MiB/s | 14.5 KiB | 00m00s [ 10/144] go-filesystem-0:3.8.0-1.fc44. 
100% | 1.4 MiB/s | 8.9 KiB | 00m00s [ 11/144] golang-0:1.25.1-1.fc44.x86_64 100% | 51.9 MiB/s | 1.2 MiB | 00m00s [ 12/144] golist-0:0.10.4-8.fc44.x86_64 100% | 95.3 MiB/s | 1.6 MiB | 00m00s [ 13/144] python3-license-expression-0: 100% | 15.2 MiB/s | 139.9 KiB | 00m00s [ 14/144] cmake-0:3.31.6-4.fc43.x86_64 100% | 145.6 MiB/s | 12.2 MiB | 00m00s [ 15/144] cmake-filesystem-0:3.31.6-4.f 100% | 860.5 KiB/s | 15.5 KiB | 00m00s [ 16/144] gcc-c++-0:15.2.1-2.fc44.x86_6 100% | 151.1 MiB/s | 15.3 MiB | 00m00s [ 17/144] hipblas-common-devel-0:7.0.1- 100% | 784.8 KiB/s | 13.3 KiB | 00m00s [ 18/144] hipblas-0:7.0.1-1.fc44.x86_64 100% | 6.0 MiB/s | 116.5 KiB | 00m00s [ 19/144] rocm-device-libs-0:20-2.rocm7 100% | 5.0 MiB/s | 503.7 KiB | 00m00s [ 20/144] perl-File-Basename-0:2.86-520 100% | 1.5 MiB/s | 17.2 KiB | 00m00s [ 21/144] perl-File-Copy-0:2.41-520.fc4 100% | 2.8 MiB/s | 20.1 KiB | 00m00s [ 22/144] perl-File-Which-0:1.27-14.fc4 100% | 2.3 MiB/s | 21.4 KiB | 00m00s [ 23/144] perl-Getopt-Std-0:1.14-520.fc 100% | 1.7 MiB/s | 15.7 KiB | 00m00s [ 24/144] perl-PathTools-0:3.94-520.fc4 100% | 8.5 MiB/s | 87.2 KiB | 00m00s [ 25/144] perl-Scalar-List-Utils-5:1.70 100% | 7.3 MiB/s | 75.0 KiB | 00m00s [ 26/144] perl-URI-0:5.34-1.fc44.noarch 100% | 13.2 MiB/s | 149.1 KiB | 00m00s [ 27/144] perl-interpreter-4:5.42.0-520 100% | 8.8 MiB/s | 72.4 KiB | 00m00s [ 28/144] rocm-comgr-0:20-2.rocm7.0.0.f 100% | 138.8 MiB/s | 31.2 MiB | 00m00s [ 29/144] rocm-hip-0:7.0.1-1.fc44.x86_6 100% | 92.2 MiB/s | 10.1 MiB | 00m00s [ 30/144] kmod-0:34.2-3.fc44.x86_64 100% | 13.0 MiB/s | 132.8 KiB | 00m00s [ 31/144] cmake-data-0:3.31.6-4.fc43.no 100% | 137.1 MiB/s | 2.5 MiB | 00m00s [ 32/144] rocm-runtime-0:7.0.1-1.fc44.x 100% | 6.5 MiB/s | 638.0 KiB | 00m00s [ 33/144] expat-0:2.7.2-1.fc44.x86_64 100% | 23.2 MiB/s | 119.0 KiB | 00m00s [ 34/144] jsoncpp-0:1.9.6-2.fc43.x86_64 100% | 19.7 MiB/s | 101.1 KiB | 00m00s [ 35/144] libuv-1:1.51.0-2.fc43.x86_64 100% | 43.3 MiB/s | 266.1 KiB | 00m00s [ 36/144] make-1:4.4.1-11.fc43.x86_64 100% | 95.2 MiB/s | 585.2 KiB | 00m00s [ 37/144] rhash-0:1.4.5-3.fc43.x86_64 100% | 38.7 MiB/s | 197.9 KiB | 00m00s [ 38/144] libmpc-0:1.3.1-8.fc43.x86_64 100% | 7.6 MiB/s | 70.4 KiB | 00m00s [ 39/144] golang-bin-0:1.25.1-1.fc44.x8 100% | 96.3 MiB/s | 17.7 MiB | 00m00s [ 40/144] gcc-0:15.2.1-2.fc44.x86_64 100% | 120.0 MiB/s | 39.7 MiB | 00m00s [ 41/144] golang-src-0:1.25.1-1.fc44.no 100% | 89.2 MiB/s | 13.6 MiB | 00m00s [ 42/144] python3-boolean.py-0:5.0-9.fc 100% | 2.9 MiB/s | 123.5 KiB | 00m00s [ 43/144] rocm-clang-devel-0:20-2.rocm7 100% | 12.0 MiB/s | 2.7 MiB | 00m00s [ 44/144] rocm-lld-0:20-2.rocm7.0.0.fc4 100% | 82.0 MiB/s | 1.6 MiB | 00m00s [ 45/144] rocblas-0:7.0.1-1.fc44.x86_64 100% | 133.2 MiB/s | 274.8 MiB | 00m02s [ 46/144] perl-Carp-0:1.54-520.fc43.noa 100% | 585.9 KiB/s | 28.7 KiB | 00m00s [ 47/144] perl-Exporter-0:5.79-520.fc43 100% | 1.5 MiB/s | 30.9 KiB | 00m00s [ 48/144] perl-overload-0:1.40-520.fc43 100% | 4.9 MiB/s | 45.6 KiB | 00m00s [ 49/144] perl-base-0:2.27-520.fc43.noa 100% | 2.0 MiB/s | 16.2 KiB | 00m00s [ 50/144] perl-constant-0:1.33-521.fc43 100% | 2.0 MiB/s | 22.8 KiB | 00m00s [ 51/144] perl-Errno-0:1.38-520.fc43.x8 100% | 1.2 MiB/s | 14.9 KiB | 00m00s [ 52/144] perl-libs-4:5.42.0-520.fc43.x 100% | 56.9 MiB/s | 2.6 MiB | 00m00s [ 53/144] perl-Data-Dumper-0:2.191-521. 
100% | 3.9 MiB/s | 56.3 KiB | 00m00s [ 54/144] perl-MIME-Base32-0:1.303-24.f 100% | 1.8 MiB/s | 20.4 KiB | 00m00s [ 55/144] perl-MIME-Base64-0:3.16-520.f 100% | 2.4 MiB/s | 29.7 KiB | 00m00s [ 56/144] perl-libnet-0:3.15-521.fc43.n 100% | 15.7 MiB/s | 128.3 KiB | 00m00s [ 57/144] perl-parent-1:0.244-520.fc43. 100% | 1.3 MiB/s | 14.8 KiB | 00m00s [ 58/144] hipcc-0:20-2.rocm7.0.0.fc44.x 100% | 14.4 MiB/s | 132.4 KiB | 00m00s [ 59/144] numactl-libs-0:2.0.19-3.fc43. 100% | 5.1 MiB/s | 31.1 KiB | 00m00s [ 60/144] libdrm-0:2.4.125-3.fc44.x86_6 100% | 3.1 MiB/s | 161.5 KiB | 00m00s [ 61/144] emacs-filesystem-1:30.0-5.fc4 100% | 1.0 MiB/s | 7.5 KiB | 00m00s [ 62/144] vim-filesystem-2:9.1.1775-2.f 100% | 1.7 MiB/s | 15.5 KiB | 00m00s [ 63/144] cpp-0:15.2.1-2.fc44.x86_64 100% | 105.9 MiB/s | 12.9 MiB | 00m00s [ 64/144] rocm-clang-0:20-2.rocm7.0.0.f 100% | 42.1 MiB/s | 15.9 MiB | 00m00s [ 65/144] rocm-clang-libs-0:20-2.rocm7. 100% | 101.2 MiB/s | 23.1 MiB | 00m00s [ 66/144] rocm-llvm-libs-0:20-2.rocm7.0 100% | 88.6 MiB/s | 21.2 MiB | 00m00s [ 67/144] rocm-llvm-static-0:20-2.rocm7 100% | 101.3 MiB/s | 281.9 MiB | 00m03s [ 68/144] rocm-llvm-devel-0:20-2.rocm7. 100% | 9.8 MiB/s | 4.3 MiB | 00m00s [ 69/144] perl-mro-0:1.29-520.fc43.x86_ 100% | 664.0 KiB/s | 29.9 KiB | 00m00s [ 70/144] perl-overloading-0:0.02-520.f 100% | 516.4 KiB/s | 12.9 KiB | 00m00s [ 71/144] perl-DynaLoader-0:1.57-520.fc 100% | 1.0 MiB/s | 26.0 KiB | 00m00s [ 72/144] perl-Digest-MD5-0:2.59-520.fc 100% | 2.9 MiB/s | 35.8 KiB | 00m00s [ 73/144] perl-B-0:1.89-520.fc43.x86_64 100% | 12.4 MiB/s | 177.7 KiB | 00m00s [ 74/144] perl-FileHandle-0:2.05-520.fc 100% | 3.8 MiB/s | 15.5 KiB | 00m00s [ 75/144] perl-Fcntl-0:1.20-520.fc43.x8 100% | 4.8 MiB/s | 29.8 KiB | 00m00s [ 76/144] perl-IO-Socket-IP-0:0.43-521. 100% | 10.3 MiB/s | 42.1 KiB | 00m00s [ 77/144] perl-IO-0:1.55-520.fc43.x86_6 100% | 16.1 MiB/s | 82.2 KiB | 00m00s [ 78/144] perl-POSIX-0:2.23-520.fc43.x8 100% | 19.1 MiB/s | 97.8 KiB | 00m00s [ 79/144] perl-Socket-4:2.040-2.fc43.x8 100% | 10.7 MiB/s | 54.9 KiB | 00m00s [ 80/144] perl-Symbol-0:1.09-520.fc43.n 100% | 2.8 MiB/s | 14.2 KiB | 00m00s [ 81/144] perl-Time-Local-2:1.350-520.f 100% | 6.7 MiB/s | 34.4 KiB | 00m00s [ 82/144] libpciaccess-0:0.16-16.fc43.x 100% | 8.5 MiB/s | 26.2 KiB | 00m00s [ 83/144] git-0:2.51.0-2.fc44.x86_64 100% | 13.4 MiB/s | 41.1 KiB | 00m00s [ 84/144] rocm-clang-runtime-devel-0:20 100% | 11.4 MiB/s | 678.8 KiB | 00m00s [ 85/144] rocm-libc++-0:20-2.rocm7.0.0. 100% | 45.5 MiB/s | 373.0 KiB | 00m00s [ 86/144] rocm-llvm-filesystem-0:20-2.r 100% | 6.1 MiB/s | 24.9 KiB | 00m00s [ 87/144] rocm-libc++-devel-0:20-2.rocm 100% | 13.2 MiB/s | 1.5 MiB | 00m00s [ 88/144] perl-vars-0:1.05-520.fc43.noa 100% | 1.4 MiB/s | 13.0 KiB | 00m00s [ 89/144] perl-if-0:0.61.000-520.fc43.n 100% | 2.0 MiB/s | 14.0 KiB | 00m00s [ 90/144] perl-Digest-0:1.20-520.fc43.n 100% | 6.1 MiB/s | 24.8 KiB | 00m00s [ 91/144] perl-File-stat-0:1.14-520.fc4 100% | 3.3 MiB/s | 17.1 KiB | 00m00s [ 92/144] perl-SelectSaver-0:1.02-520.f 100% | 2.3 MiB/s | 11.7 KiB | 00m00s [ 93/144] perl-locale-0:1.13-520.fc43.n 100% | 2.2 MiB/s | 13.5 KiB | 00m00s [ 94/144] hwdata-0:0.399-1.fc44.noarch 100% | 82.8 MiB/s | 1.7 MiB | 00m00s [ 95/144] git-core-0:2.51.0-2.fc44.x86_ 100% | 185.6 MiB/s | 5.0 MiB | 00m00s [ 96/144] git-core-doc-0:2.51.0-2.fc44. 
100% | 144.3 MiB/s | 3.0 MiB | 00m00s [ 97/144] perl-Getopt-Long-1:2.58-520.f 100% | 8.9 MiB/s | 63.6 KiB | 00m00s [ 98/144] perl-Git-0:2.51.0-2.fc44.noar 100% | 9.3 MiB/s | 38.1 KiB | 00m00s [ 99/144] perl-IPC-Open3-0:1.24-520.fc4 100% | 7.8 MiB/s | 23.9 KiB | 00m00s [100/144] perl-TermReadKey-0:2.38-26.fc 100% | 17.2 MiB/s | 35.2 KiB | 00m00s [101/144] perl-lib-0:0.65-520.fc43.x86_ 100% | 4.9 MiB/s | 15.0 KiB | 00m00s [102/144] perl-Class-Struct-0:0.68-520. 100% | 3.6 MiB/s | 22.1 KiB | 00m00s [103/144] less-0:679-4.fc44.x86_64 100% | 38.4 MiB/s | 196.8 KiB | 00m00s [104/144] openssh-clients-0:10.0p1-7.fc 100% | 104.2 MiB/s | 746.8 KiB | 00m00s [105/144] perl-Pod-Usage-4:2.05-520.fc4 100% | 9.9 MiB/s | 40.5 KiB | 00m00s [106/144] perl-Text-ParseWords-0:3.31-5 100% | 3.2 MiB/s | 16.3 KiB | 00m00s [107/144] perl-Error-1:0.17030-2.fc43.n 100% | 9.8 MiB/s | 40.2 KiB | 00m00s [108/144] libedit-0:3.1-56.20250104cvs. 100% | 14.7 MiB/s | 105.2 KiB | 00m00s [109/144] libfido2-0:1.16.0-3.fc43.x86_ 100% | 9.6 MiB/s | 98.5 KiB | 00m00s [110/144] openssh-0:10.0p1-7.fc44.x86_6 100% | 27.6 MiB/s | 338.9 KiB | 00m00s [111/144] perl-Pod-Perldoc-0:3.28.01-52 100% | 7.5 MiB/s | 84.3 KiB | 00m00s [112/144] perl-podlators-1:6.0.2-520.fc 100% | 13.9 MiB/s | 128.4 KiB | 00m00s [113/144] libcbor-0:0.12.0-6.fc43.x86_6 100% | 5.5 MiB/s | 33.5 KiB | 00m00s [114/144] rocm-llvm-0:20-2.rocm7.0.0.fc 100% | 46.4 MiB/s | 13.5 MiB | 00m00s [115/144] groff-base-0:1.23.0-11.fc44.x 100% | 42.3 MiB/s | 1.1 MiB | 00m00s [116/144] perl-File-Temp-1:0.231.200-1. 100% | 5.3 MiB/s | 59.5 KiB | 00m00s [117/144] perl-HTTP-Tiny-0:0.090-521.fc 100% | 5.5 MiB/s | 56.3 KiB | 00m00s [118/144] perl-Pod-Simple-1:3.47-3.fc43 100% | 30.7 MiB/s | 219.9 KiB | 00m00s [119/144] perl-Term-ANSIColor-0:5.01-52 100% | 6.6 MiB/s | 47.6 KiB | 00m00s [120/144] perl-Term-Cap-0:1.18-520.fc43 100% | 7.1 MiB/s | 21.9 KiB | 00m00s [121/144] perl-File-Path-0:2.18-520.fc4 100% | 6.8 MiB/s | 35.1 KiB | 00m00s [122/144] perl-IO-Socket-SSL-0:2.095-2. 100% | 32.3 MiB/s | 231.5 KiB | 00m00s [123/144] perl-Net-SSLeay-0:1.94-11.fc4 100% | 61.0 MiB/s | 374.8 KiB | 00m00s [124/144] perl-Text-Tabs+Wrap-0:2024.00 100% | 4.2 MiB/s | 21.6 KiB | 00m00s [125/144] perl-Pod-Escapes-1:1.07-520.f 100% | 2.8 MiB/s | 19.8 KiB | 00m00s [126/144] ncurses-0:6.5-7.20250614.fc43 100% | 69.4 MiB/s | 426.2 KiB | 00m00s [127/144] perl-AutoLoader-0:5.74-520.fc 100% | 3.5 MiB/s | 21.2 KiB | 00m00s [128/144] python3-0:3.14.0~rc3-1.fc44.x 100% | 6.7 MiB/s | 27.6 KiB | 00m00s [129/144] mpdecimal-0:4.0.1-2.fc43.x86_ 100% | 13.5 MiB/s | 97.1 KiB | 00m00s [130/144] python-pip-wheel-0:25.2-4.fc4 100% | 113.4 MiB/s | 1.1 MiB | 00m00s [131/144] tzdata-0:2025b-3.fc43.noarch 100% | 87.1 MiB/s | 713.9 KiB | 00m00s [132/144] zlib-ng-compat-devel-0:2.2.5- 100% | 6.2 MiB/s | 38.3 KiB | 00m00s [133/144] python3-libs-0:3.14.0~rc3-1.f 100% | 181.9 MiB/s | 9.8 MiB | 00m00s [134/144] perl-Encode-4:3.21-520.fc43.x 100% | 55.4 MiB/s | 1.1 MiB | 00m00s [135/144] perl-Storable-1:3.37-521.fc43 100% | 12.0 MiB/s | 98.5 KiB | 00m00s [136/144] glibc-devel-0:2.42.9000-5.fc4 100% | 56.1 MiB/s | 575.0 KiB | 00m00s [137/144] libxcrypt-devel-0:4.4.38-9.fc 100% | 4.8 MiB/s | 29.2 KiB | 00m00s [138/144] libstdc++-devel-0:15.2.1-2.fc 100% | 135.7 MiB/s | 5.3 MiB | 00m00s [139/144] kernel-headers-0:6.17.0-63.fc 100% | 89.3 MiB/s | 1.7 MiB | 00m00s [140/144] gcc-plugin-annobin-0:15.2.1-2 100% | 11.1 MiB/s | 57.1 KiB | 00m00s [141/144] annobin-plugin-gcc-0:12.99-1. 
100% | 88.4 MiB/s | 996.0 KiB | 00m00s [142/144] annobin-docs-0:12.99-1.fc43.n 100% | 14.6 MiB/s | 89.5 KiB | 00m00s [143/144] cmake-rpm-macros-0:3.31.6-4.f 100% | 2.1 MiB/s | 14.8 KiB | 00m00s [144/144] rocsolver-0:7.0.1-1.fc44.x86_ 100% | 148.8 MiB/s | 788.5 MiB | 00m05s -------------------------------------------------------------------------------- [144/144] Total 100% | 266.9 MiB/s | 1.6 GiB | 00m06s Running transaction [ 1/146] Verify package files 100% | 24.0 B/s | 144.0 B | 00m06s [ 2/146] Prepare transaction 100% | 953.0 B/s | 144.0 B | 00m00s [ 3/146] Installing cmake-filesystem-0 100% | 3.7 MiB/s | 7.6 KiB | 00m00s [ 4/146] Installing libmpc-0:1.3.1-8.f 100% | 158.3 MiB/s | 162.1 KiB | 00m00s [ 5/146] Installing expat-0:2.7.2-1.fc 100% | 21.0 MiB/s | 300.7 KiB | 00m00s [ 6/146] Installing rocm-llvm-filesyst 100% | 6.2 MiB/s | 19.1 KiB | 00m00s [ 7/146] Installing rocm-libc++-0:20-2 100% | 46.0 MiB/s | 1.3 MiB | 00m00s [ 8/146] Installing rocm-llvm-libs-0:2 100% | 70.6 MiB/s | 91.6 MiB | 00m01s [ 9/146] Installing rocm-clang-libs-0: 100% | 69.7 MiB/s | 94.1 MiB | 00m01s [ 10/146] Installing numactl-libs-0:2.0 100% | 56.4 MiB/s | 57.8 KiB | 00m00s [ 11/146] Installing make-1:4.4.1-11.fc 100% | 90.0 MiB/s | 1.8 MiB | 00m00s [ 12/146] Installing rocm-comgr-0:20-2. 100% | 67.8 MiB/s | 126.3 MiB | 00m02s [ 13/146] Installing go-filesystem-0:3. 100% | 0.0 B/s | 392.0 B | 00m00s [ 14/146] Installing rocm-lld-0:20-2.ro 100% | 62.6 MiB/s | 5.9 MiB | 00m00s [ 15/146] Installing rocm-libc++-devel- 100% | 106.6 MiB/s | 15.4 MiB | 00m00s [ 16/146] Installing cpp-0:15.2.1-2.fc4 100% | 321.6 MiB/s | 38.0 MiB | 00m00s [ 17/146] Installing hipblas-common-dev 100% | 17.8 MiB/s | 18.2 KiB | 00m00s [ 18/146] Installing zlib-ng-compat-dev 100% | 106.0 MiB/s | 108.5 KiB | 00m00s [ 19/146] Installing annobin-docs-0:12. 100% | 32.6 MiB/s | 100.1 KiB | 00m00s [ 20/146] Installing kernel-headers-0:6 100% | 202.2 MiB/s | 6.9 MiB | 00m00s [ 21/146] Installing glibc-devel-0:2.42 100% | 168.3 MiB/s | 2.4 MiB | 00m00s [ 22/146] Installing libxcrypt-devel-0: 100% | 16.2 MiB/s | 33.1 KiB | 00m00s [ 23/146] Installing gcc-0:15.2.1-2.fc4 100% | 375.5 MiB/s | 111.9 MiB | 00m00s [ 24/146] Installing libstdc++-devel-0: 100% | 416.5 MiB/s | 37.5 MiB | 00m00s [ 25/146] Installing tzdata-0:2025b-3.f 100% | 61.0 MiB/s | 1.9 MiB | 00m00s [ 26/146] Installing python-pip-wheel-0 100% | 589.9 MiB/s | 1.2 MiB | 00m00s [ 27/146] Installing mpdecimal-0:4.0.1- 100% | 30.5 MiB/s | 218.8 KiB | 00m00s [ 28/146] Installing python3-libs-0:3.1 100% | 318.7 MiB/s | 43.3 MiB | 00m00s [ 29/146] Installing python3-0:3.14.0~r 100% | 2.1 MiB/s | 30.7 KiB | 00m00s [ 30/146] Installing cmake-rpm-macros-0 100% | 8.1 MiB/s | 8.3 KiB | 00m00s [ 31/146] Installing python3-boolean.py 100% | 210.4 MiB/s | 646.4 KiB | 00m00s [ 32/146] Installing python3-license-ex 100% | 390.9 MiB/s | 1.2 MiB | 00m00s [ 33/146] Installing rocm-llvm-0:20-2.r 100% | 65.9 MiB/s | 52.5 MiB | 00m01s [ 34/146] Installing rocm-llvm-devel-0: 100% | 90.1 MiB/s | 28.7 MiB | 00m00s [ 35/146] Installing rocm-llvm-static-0 100% | 90.3 MiB/s | 1.9 GiB | 00m22s [ 36/146] Installing ncurses-0:6.5-7.20 100% | 26.2 MiB/s | 616.4 KiB | 00m00s [ 37/146] Installing groff-base-0:1.23. 100% | 109.9 MiB/s | 3.8 MiB | 00m00s [ 38/146] Installing perl-Digest-0:1.20 100% | 36.2 MiB/s | 37.1 KiB | 00m00s [ 39/146] Installing perl-FileHandle-0: 100% | 0.0 B/s | 9.8 KiB | 00m00s [ 40/146] Installing perl-Digest-MD5-0: 100% | 60.1 MiB/s | 61.6 KiB | 00m00s [ 41/146] Installing perl-B-0:1.89-520. 
100% | 246.4 MiB/s | 504.7 KiB | 00m00s [ 42/146] Installing perl-libnet-0:3.15 100% | 143.9 MiB/s | 294.7 KiB | 00m00s [ 43/146] Installing perl-Data-Dumper-0 100% | 114.8 MiB/s | 117.5 KiB | 00m00s [ 44/146] Installing perl-MIME-Base32-0 100% | 0.0 B/s | 32.2 KiB | 00m00s [ 45/146] Installing perl-AutoLoader-0: 100% | 0.0 B/s | 21.0 KiB | 00m00s [ 46/146] Installing perl-IO-Socket-IP- 100% | 99.8 MiB/s | 102.2 KiB | 00m00s [ 47/146] Installing perl-URI-0:5.34-1. 100% | 91.7 MiB/s | 281.8 KiB | 00m00s [ 48/146] Installing perl-Net-SSLeay-0: 100% | 271.7 MiB/s | 1.4 MiB | 00m00s [ 49/146] Installing perl-IO-Socket-SSL 100% | 350.9 MiB/s | 718.6 KiB | 00m00s [ 50/146] Installing perl-Text-Tabs+Wra 100% | 23.3 MiB/s | 23.9 KiB | 00m00s [ 51/146] Installing perl-Pod-Escapes-1 100% | 0.0 B/s | 25.9 KiB | 00m00s [ 52/146] Installing perl-File-Path-0:2 100% | 63.0 MiB/s | 64.5 KiB | 00m00s [ 53/146] Installing perl-locale-0:1.13 100% | 0.0 B/s | 6.5 KiB | 00m00s [ 54/146] Installing perl-if-0:0.61.000 100% | 0.0 B/s | 6.2 KiB | 00m00s [ 55/146] Installing perl-Time-Local-2: 100% | 68.9 MiB/s | 70.6 KiB | 00m00s [ 56/146] Installing perl-Pod-Simple-1: 100% | 187.1 MiB/s | 574.9 KiB | 00m00s [ 57/146] Installing perl-HTTP-Tiny-0:0 100% | 152.8 MiB/s | 156.4 KiB | 00m00s [ 58/146] Installing perl-File-Temp-1:0 100% | 161.6 MiB/s | 165.5 KiB | 00m00s [ 59/146] Installing perl-Term-Cap-0:1. 100% | 29.9 MiB/s | 30.6 KiB | 00m00s [ 60/146] Installing perl-Term-ANSIColo 100% | 96.9 MiB/s | 99.2 KiB | 00m00s [ 61/146] Installing perl-Class-Struct- 100% | 0.0 B/s | 25.9 KiB | 00m00s [ 62/146] Installing perl-IPC-Open3-0:1 100% | 0.0 B/s | 28.5 KiB | 00m00s [ 63/146] Installing perl-POSIX-0:2.23- 100% | 227.2 MiB/s | 232.6 KiB | 00m00s [ 64/146] Installing perl-podlators-1:6 100% | 20.9 MiB/s | 321.4 KiB | 00m00s [ 65/146] Installing perl-Pod-Perldoc-0 100% | 11.8 MiB/s | 169.2 KiB | 00m00s [ 66/146] Installing perl-File-stat-0:1 100% | 12.8 MiB/s | 13.1 KiB | 00m00s [ 67/146] Installing perl-SelectSaver-0 100% | 0.0 B/s | 2.6 KiB | 00m00s [ 68/146] Installing perl-Symbol-0:1.09 100% | 0.0 B/s | 7.3 KiB | 00m00s [ 69/146] Installing perl-Socket-4:2.04 100% | 119.4 MiB/s | 122.3 KiB | 00m00s [ 70/146] Installing perl-Pod-Usage-4:2 100% | 6.1 MiB/s | 87.9 KiB | 00m00s [ 71/146] Installing perl-Text-ParseWor 100% | 14.2 MiB/s | 14.6 KiB | 00m00s [ 72/146] Installing perl-IO-0:1.55-520 100% | 148.1 MiB/s | 151.7 KiB | 00m00s [ 73/146] Installing perl-Fcntl-0:1.20- 100% | 48.7 MiB/s | 49.9 KiB | 00m00s [ 74/146] Installing perl-overloading-0 100% | 0.0 B/s | 5.6 KiB | 00m00s [ 75/146] Installing perl-mro-0:1.29-52 100% | 0.0 B/s | 42.7 KiB | 00m00s [ 76/146] Installing perl-base-0:2.27-5 100% | 0.0 B/s | 13.0 KiB | 00m00s [ 77/146] Installing perl-File-Basename 100% | 0.0 B/s | 14.6 KiB | 00m00s [ 78/146] Installing perl-Getopt-Long-1 100% | 143.8 MiB/s | 147.2 KiB | 00m00s [ 79/146] Installing perl-Storable-1:3. 100% | 227.4 MiB/s | 232.8 KiB | 00m00s [ 80/146] Installing perl-vars-0:1.05-5 100% | 0.0 B/s | 4.3 KiB | 00m00s [ 81/146] Installing perl-overload-0:1. 100% | 0.0 B/s | 72.0 KiB | 00m00s [ 82/146] Installing perl-parent-1:0.24 100% | 0.0 B/s | 11.0 KiB | 00m00s [ 83/146] Installing perl-MIME-Base64-0 100% | 43.2 MiB/s | 44.3 KiB | 00m00s [ 84/146] Installing perl-Errno-0:1.38- 100% | 0.0 B/s | 8.8 KiB | 00m00s [ 85/146] Installing perl-constant-0:1. 
100% | 0.0 B/s | 27.4 KiB | 00m00s [ 86/146] Installing perl-Scalar-List-U 100% | 145.2 MiB/s | 148.7 KiB | 00m00s [ 87/146] Installing perl-Getopt-Std-0: 100% | 11.5 MiB/s | 11.8 KiB | 00m00s [ 88/146] Installing perl-Encode-4:3.21 100% | 180.5 MiB/s | 4.7 MiB | 00m00s [ 89/146] Installing perl-DynaLoader-0: 100% | 31.7 MiB/s | 32.5 KiB | 00m00s [ 90/146] Installing perl-PathTools-0:3 100% | 90.1 MiB/s | 184.6 KiB | 00m00s [ 91/146] Installing perl-Exporter-0:5. 100% | 54.3 MiB/s | 55.6 KiB | 00m00s [ 92/146] Installing perl-Carp-0:1.54-5 100% | 23.3 MiB/s | 47.7 KiB | 00m00s [ 93/146] Installing perl-libs-4:5.42.0 100% | 270.9 MiB/s | 11.6 MiB | 00m00s [ 94/146] Installing perl-interpreter-4 100% | 8.4 MiB/s | 120.3 KiB | 00m00s [ 95/146] Installing perl-File-Copy-0:2 100% | 19.7 MiB/s | 20.2 KiB | 00m00s [ 96/146] Installing perl-File-Which-0: 100% | 0.0 B/s | 31.4 KiB | 00m00s [ 97/146] Installing perl-TermReadKey-0 100% | 64.6 MiB/s | 66.2 KiB | 00m00s [ 98/146] Installing perl-lib-0:0.65-52 100% | 0.0 B/s | 8.9 KiB | 00m00s [ 99/146] Installing perl-Error-1:0.170 100% | 78.1 MiB/s | 80.0 KiB | 00m00s [100/146] Installing libcbor-0:0.12.0-6 100% | 77.3 MiB/s | 79.2 KiB | 00m00s [101/146] Installing libfido2-0:1.16.0- 100% | 234.4 MiB/s | 240.0 KiB | 00m00s [102/146] Installing openssh-0:10.0p1-7 100% | 87.0 MiB/s | 1.4 MiB | 00m00s [103/146] Installing libedit-0:3.1-56.2 100% | 236.1 MiB/s | 241.8 KiB | 00m00s [104/146] Installing openssh-clients-0: 100% | 100.5 MiB/s | 2.6 MiB | 00m00s [105/146] Installing less-0:679-4.fc44. 100% | 25.1 MiB/s | 410.5 KiB | 00m00s [106/146] Installing git-core-0:2.51.0- 100% | 328.6 MiB/s | 23.7 MiB | 00m00s [107/146] Installing git-core-doc-0:2.5 100% | 344.1 MiB/s | 17.9 MiB | 00m00s [108/146] Installing git-0:2.51.0-2.fc4 100% | 56.4 MiB/s | 57.7 KiB | 00m00s [109/146] Installing perl-Git-0:2.51.0- 100% | 0.0 B/s | 65.4 KiB | 00m00s [110/146] Installing hwdata-0:0.399-1.f 100% | 456.9 MiB/s | 9.6 MiB | 00m00s [111/146] Installing libpciaccess-0:0.1 100% | 44.8 MiB/s | 45.9 KiB | 00m00s [112/146] Installing libdrm-0:2.4.125-3 100% | 195.1 MiB/s | 399.7 KiB | 00m00s [113/146] Installing rocm-runtime-0:7.0 100% | 459.9 MiB/s | 3.2 MiB | 00m00s [114/146] Installing rocm-runtime-devel 100% | 222.3 MiB/s | 682.8 KiB | 00m00s [115/146] Installing rocm-clang-runtime 100% | 128.5 MiB/s | 8.5 MiB | 00m00s [116/146] Installing rocm-clang-0:20-2. 100% | 72.3 MiB/s | 68.5 MiB | 00m01s [117/146] Installing rocm-clang-devel-0 100% | 117.7 MiB/s | 26.3 MiB | 00m00s [118/146] Installing rocm-device-libs-0 100% | 88.0 MiB/s | 3.3 MiB | 00m00s [119/146] Installing rocm-comgr-devel-0 100% | 49.7 MiB/s | 101.9 KiB | 00m00s [120/146] Installing hipcc-0:20-2.rocm7 100% | 29.6 MiB/s | 635.9 KiB | 00m00s [121/146] Installing rocm-hip-0:7.0.1-1 100% | 310.2 MiB/s | 26.7 MiB | 00m00s [122/146] Installing rocblas-0:7.0.1-1. 100% | 80.8 MiB/s | 946.7 MiB | 00m12s [123/146] Installing rocsolver-0:7.0.1- 100% | 31.3 MiB/s | 883.3 MiB | 00m28s [124/146] Installing hipblas-0:7.0.1-1. 100% | 27.0 MiB/s | 800.7 KiB | 00m00s [125/146] Installing rocm-hip-devel-0:7 100% | 49.0 MiB/s | 3.0 MiB | 00m00s [126/146] Installing vim-filesystem-2:9 100% | 674.1 KiB/s | 4.7 KiB | 00m00s [127/146] Installing emacs-filesystem-1 100% | 26.6 KiB/s | 544.0 B | 00m00s [128/146] Installing golang-src-0:1.25. 100% | 115.9 MiB/s | 82.4 MiB | 00m01s [129/146] Installing golang-0:1.25.1-1. 100% | 282.4 MiB/s | 9.6 MiB | 00m00s [130/146] Installing golang-bin-0:1.25. 
100% | 174.2 MiB/s | 67.2 MiB | 00m00s [131/146] Installing rhash-0:1.4.5-3.fc 100% | 12.4 MiB/s | 356.4 KiB | 00m00s [132/146] Installing libuv-1:1.51.0-2.f 100% | 37.3 MiB/s | 573.0 KiB | 00m00s [133/146] Installing jsoncpp-0:1.9.6-2. 100% | 18.1 MiB/s | 259.2 KiB | 00m00s [134/146] Installing cmake-0:3.31.6-4.f 100% | 280.5 MiB/s | 34.5 MiB | 00m00s [135/146] Installing cmake-data-0:3.31. 100% | 105.4 MiB/s | 9.1 MiB | 00m00s [136/146] Installing kmod-0:34.2-3.fc44 100% | 13.0 MiB/s | 253.1 KiB | 00m00s [137/146] Installing golist-0:0.10.4-8. 100% | 154.1 MiB/s | 4.5 MiB | 00m00s [138/146] Installing go-rpm-macros-0:3. 100% | 6.5 MiB/s | 99.5 KiB | 00m00s [139/146] Installing rocminfo-0:7.0.1-1 100% | 5.1 MiB/s | 79.0 KiB | 00m00s [140/146] Installing rocblas-devel-0:7. 100% | 160.2 MiB/s | 2.7 MiB | 00m00s [141/146] Installing hipblas-devel-0:7. 100% | 150.4 MiB/s | 2.4 MiB | 00m00s [142/146] Installing go-vendor-tools-0: 100% | 19.2 MiB/s | 393.9 KiB | 00m00s [143/146] Installing gcc-c++-0:15.2.1-2 100% | 299.8 MiB/s | 41.4 MiB | 00m00s [144/146] Installing annobin-plugin-gcc 100% | 54.8 MiB/s | 1.0 MiB | 00m00s [145/146] Installing gcc-plugin-annobin 100% | 2.9 MiB/s | 58.6 KiB | 00m00s [146/146] Installing systemd-rpm-macros 100% | 15.6 KiB/s | 8.9 KiB | 00m01s Complete! Finish: build setup for ollama-0.12.3-1.fc44.src.rpm Start: rpmbuild ollama-0.12.3-1.fc44.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%mkbuilddir): /bin/sh -e /var/tmp/rpm-tmp.TJihhz Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.TVVbVi + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd /builddir/build/BUILD/ollama-0.12.3-build + rm -rf ollama-0.12.3 + /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/ollama-0.12.3.tar.gz + STATUS=0 + '[' 0 -ne 0 ']' + cd ollama-0.12.3 + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + rm -fr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/vendor + [[ ! -e /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin ]] + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin' + export GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + [[ ! 
-e /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama ]] ++ dirname /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama' + ln -fs /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/vendor.tar.bz2 + STATUS=0 + '[' 0 -ne 0 ']' + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/remove-runtime-for-cuda-and-rocm.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/replace-library-paths.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/vendor-pdevine-tensor-fix-cannonical-import-paths.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + cp /builddir/build/SOURCES/LICENSE.sentencepiece convert/sentencepiece/LICENSE + RPM_EC=0 ++ jobs -p + exit 0 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.vFuAF0 + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc44.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires Updating and loading repositories: Additional repo https_developer_downlo 100% | 14.9 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 14.9 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 5.7 KiB/s | 1.5 KiB | 00m00s fedora 100% | 86.7 KiB/s | 29.1 KiB | 00m00s Repositories loaded. Package "cmake-3.31.6-4.fc43.x86_64" is already installed. Package "gcc-c++-15.2.1-2.fc44.x86_64" is already installed. Package "go-rpm-macros-3.8.0-1.fc44.x86_64" is already installed. Package "go-vendor-tools-0.9.0-1.fc44.noarch" is already installed. Package "hipblas-devel-7.0.1-1.fc44.x86_64" is already installed. Package "rocblas-devel-7.0.1-1.fc44.x86_64" is already installed. Package "rocm-comgr-devel-20-2.rocm7.0.0.fc44.x86_64" is already installed. Package "rocm-hip-devel-7.0.1-1.fc44.x86_64" is already installed. Package "rocm-runtime-devel-7.0.1-1.fc44.x86_64" is already installed. Package "rocminfo-7.0.1-1.fc44.x86_64" is already installed. Package "systemd-rpm-macros-258-1.fc44.noarch" is already installed. Package Arch Version Repository Size Installing: askalono-cli x86_64 0.5.0-3.fc43 fedora 4.6 MiB Transaction Summary: Installing: 1 package Total size of inbound packages is 2 MiB. Need to download 2 MiB. After this operation, 5 MiB extra will be used (install 5 MiB, remove 0 B). 
[1/1] askalono-cli-0:0.5.0-3.fc43.x86_6 100% | 77.9 MiB/s | 2.4 MiB | 00m00s -------------------------------------------------------------------------------- [1/1] Total 100% | 71.0 MiB/s | 2.4 MiB | 00m00s Running transaction [1/3] Verify package files 100% | 111.0 B/s | 1.0 B | 00m00s [2/3] Prepare transaction 100% | 34.0 B/s | 1.0 B | 00m00s [3/3] Installing askalono-cli-0:0.5.0-3 100% | 118.6 MiB/s | 4.6 MiB | 00m00s Complete! Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.5PsaeC + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc44.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires Updating and loading repositories: Additional repo https_developer_downlo 100% | 21.2 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 21.2 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 8.1 KiB/s | 1.5 KiB | 00m00s fedora 100% | 118.4 KiB/s | 29.1 KiB | 00m00s Repositories loaded. Package "askalono-cli-0.5.0-3.fc43.x86_64" is already installed. Package "cmake-3.31.6-4.fc43.x86_64" is already installed. Package "gcc-c++-15.2.1-2.fc44.x86_64" is already installed. Package "go-rpm-macros-3.8.0-1.fc44.x86_64" is already installed. Package "go-vendor-tools-0.9.0-1.fc44.noarch" is already installed. Package "hipblas-devel-7.0.1-1.fc44.x86_64" is already installed. Package "rocblas-devel-7.0.1-1.fc44.x86_64" is already installed. Package "rocm-comgr-devel-20-2.rocm7.0.0.fc44.x86_64" is already installed. Package "rocm-hip-devel-7.0.1-1.fc44.x86_64" is already installed. Package "rocm-runtime-devel-7.0.1-1.fc44.x86_64" is already installed. Package "rocminfo-7.0.1-1.fc44.x86_64" is already installed. Package "systemd-rpm-macros-258-1.fc44.noarch" is already installed. Nothing to do. 
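(Aside: the repeated %generate_buildrequires passes above show RPM's dynamic BuildRequires mechanism: the scriptlet's stdout becomes the BuildRequires of the intermediate ollama-0.12.3-1.fc44.buildreqs.nosrc.rpm, mock installs whatever is still missing — here askalono-cli for license scanning — and the loop ends once a pass reports "Nothing to do." A minimal spec-style sketch of the mechanism, illustrative only and not the actual ollama.spec:)

%generate_buildrequires
# every capability printed to stdout is turned into a BuildRequires for the next pass;
# the real spec derives these via go_vendor_license from the vendored Go sources
echo 'askalono-cli'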
Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.V1v0MG + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.UGvUPb + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + export 'GO_LDFLAGS= -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release' + GO_LDFLAGS=' -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release' ++ echo ollama-0.12.3-1.fc44-1759536000 ++ sha1sum ++ cut -d ' ' -f1 + GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + GO111MODULE=on + go build -buildmode pie -compiler gc '-tags=rpm_crashtraceback ' -a -v -ldflags ' -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release -X github.com/ollama/ollama/version=0.12.3 -B 0x060667f78f863f0e57b023378a3383660a76c3d1 -compressdwarf=false -linkmode=external -extldflags '\''-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '\''' -o /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama github.com/ollama/ollama internal/goarch internal/byteorder internal/unsafeheader internal/cpu internal/coverage/rtcov internal/abi internal/godebugs internal/goexperiment internal/goos internal/profilerecord internal/bytealg internal/chacha8rand internal/runtime/atomic internal/runtime/math internal/runtime/syscall internal/runtime/strconv internal/runtime/exithook internal/runtime/gc internal/asan internal/msan internal/runtime/cgroup internal/runtime/sys internal/stringslite internal/trace/tracev2 sync/atomic math/bits internal/itoa cmp internal/synctest internal/race unicode unicode/utf8 internal/runtime/maps internal/sync crypto/internal/fips140deps/byteorder math crypto/internal/fips140deps/cpu crypto/internal/fips140/alias crypto/internal/fips140/subtle crypto/internal/boring/sig encoding unicode/utf16 github.com/rivo/uniseg internal/nettrace runtime vendor/golang.org/x/crypto/cryptobyte/asn1 golang.org/x/crypto/internal/alias log/internal log/slog/internal github.com/ollama/ollama/version container/list vendor/golang.org/x/crypto/internal/alias golang.org/x/text/encoding/internal/identifier golang.org/x/text/internal/utf8internal github.com/ollama/ollama/fs image/color golang.org/x/image/math/f64 github.com/gin-gonic/gin/internal/bytesconv golang.org/x/net/html/atom github.com/go-playground/locales/currency github.com/leodido/go-urn/scim/schema github.com/pelletier/go-toml/v2/internal/characters google.golang.org/protobuf/internal/flags github.com/d4l3k/go-bfloat16 google.golang.org/protobuf/internal/set github.com/apache/arrow/go/arrow/internal/debug golang.org/x/xerrors/internal gorgonia.org/vecf64 math/cmplx github.com/chewxy/math32 gonum.org/v1/gonum/blas 
gonum.org/v1/gonum/internal/asm/c128 gorgonia.org/vecf32 gonum.org/v1/gonum/internal/math32 gonum.org/v1/gonum/lapack gonum.org/v1/gonum/internal/asm/f64 gonum.org/v1/gonum/internal/cmplx64 gonum.org/v1/gonum/internal/asm/f32 gonum.org/v1/gonum/internal/asm/c64 gonum.org/v1/gonum/mathext/internal/amos gonum.org/v1/gonum/mathext/internal/gonum gonum.org/v1/gonum/mathext/internal/cephes github.com/ollama/ollama/server/internal/internal/stringsx github.com/agnivade/levenshtein gonum.org/v1/gonum/mathext internal/reflectlite iter sync weak slices crypto/subtle maps errors sort internal/oserror strconv path internal/bisect syscall internal/godebug io bytes strings hash crypto crypto/internal/fips140deps/godebug internal/testlog math/rand/v2 crypto/internal/fips140cache reflect bufio crypto/internal/fips140 crypto/internal/impl crypto/internal/fips140/sha256 crypto/internal/fips140/sha3 time crypto/internal/fips140/sha512 internal/syscall/unix crypto/internal/fips140/hmac internal/syscall/execenv crypto/internal/fips140/check crypto/internal/randutil math/rand crypto/internal/fips140/aes crypto/internal/fips140/edwards25519/field encoding/base64 encoding/pem crypto/internal/fips140/edwards25519 context io/fs internal/poll regexp/syntax internal/filepathlite vendor/golang.org/x/net/dns/dnsmessage regexp internal/singleflight os unique net/netip runtime/cgo internal/fmtsort encoding/binary golang.org/x/sys/unix crypto/internal/sysrand crypto/internal/entropy fmt crypto/internal/fips140/drbg crypto/internal/fips140/ed25519 crypto/internal/fips140only crypto/internal/fips140/aes/gcm crypto/cipher math/big crypto/internal/boring encoding/json github.com/containerd/console github.com/mattn/go-runewidth encoding/csv github.com/olekukonko/tablewriter crypto/md5 crypto/sha1 crypto/rand database/sql/driver crypto/ed25519 encoding/hex crypto/aes crypto/des crypto/dsa crypto/internal/fips140/nistec/fiat crypto/internal/boring/bbig crypto/internal/fips140/bigmod crypto/sha3 crypto/internal/fips140hash crypto/sha512 encoding/asn1 crypto/hmac crypto/rc4 crypto/internal/fips140/rsa crypto/rsa vendor/golang.org/x/crypto/cryptobyte crypto/internal/fips140/nistec crypto/sha256 crypto/x509/pkix net/url path/filepath golang.org/x/crypto/chacha20 golang.org/x/crypto/internal/poly1305 golang.org/x/crypto/blowfish log golang.org/x/crypto/ssh/internal/bcrypt_pbkdf log/slog/internal/buffer github.com/ollama/ollama/format log/slog compress/flate net crypto/internal/fips140/ecdh crypto/elliptic crypto/ecdh crypto/internal/fips140/ecdsa github.com/ollama/ollama/types/model golang.org/x/crypto/curve25519 hash/crc32 crypto/ecdsa crypto/internal/fips140/hkdf crypto/hkdf compress/gzip crypto/internal/fips140/mlkem crypto/internal/fips140/tls12 crypto/internal/fips140/tls13 vendor/golang.org/x/crypto/chacha20 vendor/golang.org/x/crypto/internal/poly1305 vendor/golang.org/x/sys/cpu crypto/fips140 crypto/tls/internal/fips140tls vendor/golang.org/x/text/transform vendor/golang.org/x/crypto/chacha20poly1305 vendor/golang.org/x/text/unicode/bidi vendor/golang.org/x/text/unicode/norm crypto/internal/hpke vendor/golang.org/x/net/http2/hpack vendor/golang.org/x/text/secure/bidirule mime mime/quotedprintable net/http/internal net/http/internal/ascii golang.org/x/sync/errgroup golang.org/x/text/transform vendor/golang.org/x/net/idna golang.org/x/text/encoding golang.org/x/text/runes golang.org/x/text/encoding/internal os/user golang.org/x/text/encoding/unicode golang.org/x/term github.com/emirpasic/gods/v2/utils 
github.com/emirpasic/gods/v2/containers github.com/emirpasic/gods/v2/lists github.com/emirpasic/gods/v2/lists/arraylist github.com/ollama/ollama/progress github.com/ollama/ollama/readline flag embed github.com/ollama/ollama/llama/llama.cpp/common github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile github.com/google/uuid crypto/x509 golang.org/x/crypto/ssh github.com/ollama/ollama/auth github.com/ollama/ollama/envconfig crypto/tls net/textproto vendor/golang.org/x/net/http/httpguts vendor/golang.org/x/net/http/httpproxy mime/multipart github.com/ollama/ollama/llama/llama.cpp/tools/mtmd net/http/httptrace github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu net/http/internal/httpcommon net/http github.com/ollama/ollama/api github.com/ollama/ollama/parser github.com/ollama/ollama/discover github.com/ollama/ollama/fs/util/bufioutil github.com/ollama/ollama/fs/ggml github.com/ollama/ollama/logutil hash/maphash github.com/ollama/ollama/ml container/heap github.com/dlclark/regexp2/syntax github.com/dlclark/regexp2 github.com/emirpasic/gods/v2/trees github.com/emirpasic/gods/v2/trees/binaryheap github.com/ollama/ollama/model/input github.com/ollama/ollama/kvcache github.com/ollama/ollama/ml/nn/rope github.com/ollama/ollama/ml/nn/pooling image golang.org/x/image/bmp hash/adler32 compress/zlib golang.org/x/image/ccitt golang.org/x/image/tiff/lzw golang.org/x/image/tiff io/ioutil golang.org/x/image/riff golang.org/x/image/vp8 golang.org/x/image/vp8l golang.org/x/image/webp image/internal/imageutil image/jpeg image/png golang.org/x/sync/semaphore os/exec github.com/ollama/ollama/runner/common github.com/ollama/ollama/ml/nn github.com/ollama/ollama/ml/nn/fast image/draw golang.org/x/image/draw github.com/ollama/ollama/model/imageproc encoding/xml github.com/gin-contrib/sse github.com/gin-gonic/gin/internal/json golang.org/x/net/html github.com/gabriel-vasile/mimetype/internal/charset debug/dwarf internal/saferio debug/macho github.com/gabriel-vasile/mimetype/internal/json github.com/gabriel-vasile/mimetype/internal/magic github.com/gabriel-vasile/mimetype github.com/go-playground/locales github.com/go-playground/universal-translator github.com/leodido/go-urn golang.org/x/sys/cpu golang.org/x/crypto/sha3 golang.org/x/text/internal/tag golang.org/x/text/internal/language golang.org/x/text/internal/language/compact golang.org/x/text/language github.com/go-playground/validator/v10 github.com/pelletier/go-toml/v2/internal/danger github.com/pelletier/go-toml/v2/unstable github.com/pelletier/go-toml/v2/internal/tracker github.com/pelletier/go-toml/v2 encoding/gob go/token html text/template/parse text/template html/template net/rpc github.com/ugorji/go/codec hash/fnv google.golang.org/protobuf/internal/detrand google.golang.org/protobuf/internal/errors google.golang.org/protobuf/encoding/protowire google.golang.org/protobuf/internal/pragma google.golang.org/protobuf/reflect/protoreflect google.golang.org/protobuf/internal/encoding/messageset google.golang.org/protobuf/internal/genid google.golang.org/protobuf/internal/order google.golang.org/protobuf/internal/strs google.golang.org/protobuf/reflect/protoregistry google.golang.org/protobuf/runtime/protoiface google.golang.org/protobuf/proto gopkg.in/yaml.v3 github.com/gin-gonic/gin/binding github.com/gin-gonic/gin/render github.com/mattn/go-isatty golang.org/x/text/unicode/bidi golang.org/x/text/secure/bidirule golang.org/x/text/unicode/norm golang.org/x/net/idna 
golang.org/x/net/http/httpguts golang.org/x/net/http2/hpack golang.org/x/net/internal/httpcommon golang.org/x/net/http2 golang.org/x/net/http2/h2c net/http/httputil github.com/gin-gonic/gin github.com/gin-contrib/cors archive/tar archive/zip golang.org/x/text/encoding/unicode/utf32 github.com/nlpodyssey/gopickle/types github.com/nlpodyssey/gopickle/pickle github.com/nlpodyssey/gopickle/pytorch google.golang.org/protobuf/internal/descfmt google.golang.org/protobuf/internal/descopts google.golang.org/protobuf/internal/editiondefaults google.golang.org/protobuf/internal/encoding/text google.golang.org/protobuf/internal/encoding/defval google.golang.org/protobuf/internal/filedesc google.golang.org/protobuf/encoding/prototext google.golang.org/protobuf/internal/encoding/tag google.golang.org/protobuf/internal/impl google.golang.org/protobuf/internal/filetype google.golang.org/protobuf/internal/version google.golang.org/protobuf/runtime/protoimpl github.com/ollama/ollama/convert/sentencepiece github.com/apache/arrow/go/arrow/endian github.com/apache/arrow/go/arrow/internal/cpu github.com/apache/arrow/go/arrow/memory github.com/apache/arrow/go/arrow/bitutil github.com/apache/arrow/go/arrow/decimal128 github.com/apache/arrow/go/arrow/float16 golang.org/x/xerrors github.com/apache/arrow/go/arrow github.com/apache/arrow/go/arrow/array github.com/apache/arrow/go/arrow/tensor github.com/pkg/errors github.com/xtgo/set github.com/chewxy/hm github.com/google/flatbuffers/go github.com/pdevine/tensor/internal/storage github.com/pdevine/tensor/internal/execution github.com/ollama/ollama/ml/backend/ggml/ggml/src github.com/pdevine/tensor/internal/serialization/fb github.com/gogo/protobuf/proto google.golang.org/protobuf/types/descriptorpb google.golang.org/protobuf/internal/editionssupport google.golang.org/protobuf/types/gofeaturespb google.golang.org/protobuf/reflect/protodesc github.com/golang/protobuf/proto go4.org/unsafe/assume-no-moving-gc gonum.org/v1/gonum/blas/gonum github.com/gogo/protobuf/protoc-gen-gogo/descriptor gonum.org/v1/gonum/floats/scalar gonum.org/v1/gonum/floats github.com/x448/float16 golang.org/x/exp/rand gonum.org/v1/gonum/stat/combin github.com/ollama/ollama/fs/gguf github.com/gogo/protobuf/gogoproto github.com/pdevine/tensor/internal/serialization/pb github.com/ollama/ollama/harmony github.com/ollama/ollama/model/parsers github.com/ollama/ollama/model/renderers github.com/ollama/ollama/openai github.com/ollama/ollama/server/internal/internal/names github.com/ollama/ollama/server/internal/cache/blob runtime/debug gonum.org/v1/gonum/blas/blas64 gonum.org/v1/gonum/blas/cblas128 github.com/ollama/ollama/server/internal/client/ollama gonum.org/v1/gonum/lapack/gonum github.com/ollama/ollama/server/internal/internal/backoff github.com/ollama/ollama/template github.com/ollama/ollama/thinking github.com/ollama/ollama/server/internal/registry github.com/ollama/ollama/tools github.com/ollama/ollama/types/errtypes os/signal github.com/ollama/ollama/types/syncmap github.com/spf13/pflag github.com/spf13/cobra gonum.org/v1/gonum/lapack/lapack64 gonum.org/v1/gonum/mat gonum.org/v1/gonum/stat github.com/pdevine/tensor gonum.org/v1/gonum/stat/distuv github.com/pdevine/tensor/native github.com/ollama/ollama/convert github.com/ollama/ollama/ml/backend/ggml github.com/ollama/ollama/llama/llama.cpp/src github.com/ollama/ollama/ml/backend github.com/ollama/ollama/model github.com/ollama/ollama/model/models/bert github.com/ollama/ollama/model/models/deepseek2 github.com/ollama/ollama/model/models/gemma2 
github.com/ollama/ollama/model/models/gemma3 github.com/ollama/ollama/model/models/gemma3n github.com/ollama/ollama/model/models/gptoss github.com/ollama/ollama/model/models/llama github.com/ollama/ollama/model/models/llama4 github.com/ollama/ollama/model/models/mistral3 github.com/ollama/ollama/model/models/mllama github.com/ollama/ollama/model/models/qwen2 github.com/ollama/ollama/model/models/qwen25vl github.com/ollama/ollama/model/models/qwen3 github.com/ollama/ollama/model/models github.com/ollama/ollama/llama github.com/ollama/ollama/llm github.com/ollama/ollama/sample github.com/ollama/ollama/runner/llamarunner github.com/ollama/ollama/runner/ollamarunner github.com/ollama/ollama/server github.com/ollama/ollama/runner github.com/ollama/ollama/cmd github.com/ollama/ollama + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + /usr/bin/cmake -S . 
-B redhat-linux-build_ggml-cpu -DCMAKE_C_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_Fortran_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_INSTALL_DO_STRIP:BOOL=OFF -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_FULL_SBINDIR:PATH=/usr/bin -DCMAKE_INSTALL_SBINDIR:PATH=bin -DINCLUDE_INSTALL_DIR:PATH=/usr/include -DLIB_INSTALL_DIR:PATH=/usr/lib64 -DSYSCONF_INSTALL_DIR:PATH=/etc -DSHARE_INSTALL_PREFIX:PATH=/usr/share -DLIB_SUFFIX=64 -DBUILD_SHARED_LIBS:BOOL=ON --preset CPU Preset CMake variables: CMAKE_BUILD_TYPE="Release" CMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded" -- The C compiler identification is GNU 15.2.1 -- The CXX compiler identification is GNU 15.2.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/gcc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/g++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- GGML_SYSTEM_ARCH: x86 -- Including CPU backend -- x86 detected -- Adding CPU backend variant ggml-cpu-x64: -- x86 detected -- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42 -- x86 detected -- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX -- x86 detected -- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2 -- x86 detected -- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512 -- x86 detected -- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI -- x86 detected -- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI -- Looking for a CUDA compiler -- Looking for a CUDA compiler - NOTFOUND -- Looking for a HIP compiler -- Looking for a HIP compiler - /usr/lib64/rocm/llvm/bin/clang++ -- Configuring done (7.7s) -- Generating done (0.0s) CMake Warning: Manually-specified variables were not used by the project: CMAKE_Fortran_FLAGS_RELEASE CMAKE_INSTALL_DO_STRIP INCLUDE_INSTALL_DIR LIB_SUFFIX SHARE_INSTALL_PREFIX SYSCONF_INSTALL_DIR -- Build files have been written to: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu + /usr/bin/cmake --build redhat-linux-build_ggml-cpu -j4 --verbose --target ggml-cpu Change Dir: '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile -j4 ggml-cpu /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 
-B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/gmake -f CMakeFiles/Makefile2 ggml-cpu gmake[1]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/CMakeFiles 99 /usr/bin/gmake -f CMakeFiles/Makefile2 ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/all gmake[2]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/depend cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 2%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o [ 2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF CMakeFiles/ggml-base.dir/ggml.c.o.d -o CMakeFiles/ggml-base.dir/ggml.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5663:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function] 5663 | static void ggml_hash_map_free(struct hash_map * map) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5656:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function] 5656 | static struct hash_map * ggml_new_hash_map(size_t size) { | ^~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 3%] Built target ggml-cpu-sse42-feats [ 3%] Built target ggml-cpu-alderlake-feats gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o [ 4%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o -MF CMakeFiles/ggml-base.dir/ggml.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp [ 4%] Built target ggml-cpu-x64-feats cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp:14: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build.make 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 6%] Built target ggml-cpu-sandybridge-feats [ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 7%] Built target ggml-cpu-haswell-feats [ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-threading.cpp /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 9%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 9%] Built target ggml-cpu-skylakex-feats [ 9%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c [ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o CMakeFiles/ggml-base.dir/gguf.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 10%] Built target ggml-cpu-icelake-feats In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp:3: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but 
not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:4067:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function] 4067 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid, | ^~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function] 579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min, | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-base.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-base.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -Wl,-soname,libggml-base.so -o ../../../../../lib/ollama/libggml-base.so "CMakeFiles/ggml-base.dir/ggml.c.o" "CMakeFiles/ggml-base.dir/ggml.cpp.o" "CMakeFiles/ggml-base.dir/ggml-alloc.c.o" "CMakeFiles/ggml-base.dir/ggml-backend.cpp.o" "CMakeFiles/ggml-base.dir/ggml-opt.cpp.o" "CMakeFiles/ggml-base.dir/ggml-threading.cpp.o" "CMakeFiles/ggml-base.dir/ggml-quants.c.o" "CMakeFiles/ggml-base.dir/gguf.cpp.o" -lm gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 11%] Built target ggml-base /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build 
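Note on the warning blocks above: ggml-impl.h defines a number of plain "static" helper functions (ggml_hash_insert, ggml_bitset_size, ggml_set_op_params, ggml_are_same_layout, and so on), so every C or C++ translation unit that includes the header but never calls a particular helper gets a GCC "defined but not used" [-Wunused-function] diagnostic. The warnings are benign and repeat for each ggml source file compiled below. The standalone sketch that follows reproduces the pattern in isolation; the file and function names are invented for illustration, and whether upstream would prefer "static inline" (which GCC exempts from this warning) is an assumption, not a statement about the ggml sources.

    /* unused_demo.c -- illustrative only; the file and function names are
     * made up.  Mirrors the pattern behind the warnings above: a plain
     * 'static' helper defined in a shared header warns in every file that
     * includes it without calling it, while a 'static inline' helper does
     * not ("gcc -Wall -c unused_demo.c" shows the difference). */

    static int helper(int x) {                 /* plain static: warns when unused */
        return x + 1;
    }

    static inline int helper_inline(int x) {   /* static inline: exempt from
                                                  -Wunused-function */
        return x + 1;
    }

    int main(void) {
        return 0;                              /* neither helper is called */
    }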
gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 12%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o [ 13%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c [ 14%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c [ 15%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | 
^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 
261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 19%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
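The compile commands in this stretch build the same ggml-cpu sources four times, once per CPU variant named in the log (x64, sse42, sandybridge, alderlake), each with a matching set of -m instruction-set flags and GGML_* defines, and with GGML_BACKEND_DL set so each variant becomes a separately loadable backend. A minimal sketch of how a loader could pick among such variants at run time is shown below; the pick_cpu_variant() function, the library naming, and the feature-to-variant mapping are illustrative assumptions drawn from the -m flags above, not ollama's actual selection logic.

    /* variant_pick.c -- illustrative sketch only.  Shows one way a runtime
     * loader could choose among the per-ISA ggml-cpu builds seen above;
     * the variant names come from the log, everything else is assumed. */
    #include <stdio.h>

    static const char *pick_cpu_variant(void) {
        /* Ordered from most to least demanding, roughly matching the -m
         * flags used above: alderlake (AVX2/FMA/F16C/BMI2), sandybridge
         * (AVX), sse42 (SSE4.2), x64 (baseline x86-64). */
        if (__builtin_cpu_supports("avx2") && __builtin_cpu_supports("fma") &&
            __builtin_cpu_supports("f16c") && __builtin_cpu_supports("bmi2"))
            return "alderlake";
        if (__builtin_cpu_supports("avx"))
            return "sandybridge";
        if (__builtin_cpu_supports("sse4.2"))
            return "sse42";
        return "x64";
    }

    int main(void) {
        __builtin_cpu_init();    /* initialize GCC's CPU feature data */
        printf("would load libggml-cpu-%s.so\n", pick_cpu_variant());
        return 0;
    }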
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, 
uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: 
warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void 
ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o [ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
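Note: the ggml-cpu sources above are rebuilt once per CPU variant, and the only real difference between the objects is the instruction-set flags each variant is given: ggml-cpu-x64 sticks to the baseline -march=x86-64, ggml-cpu-sse42 adds -msse4.2, ggml-cpu-sandybridge adds -msse4.2 -mavx, and ggml-cpu-alderlake adds -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni. Together with -DGGML_BACKEND_DL and -DGGML_BACKEND_SHARED, each variant becomes its own loadable shared backend so the best match for the host CPU can be picked at run time. A minimal hedged C sketch of how one source file reacts to those per-variant flags (the function name dot_path is illustrative, not taken from the ollama/ggml tree):

    #include <stdio.h>

    /* GCC predefines __SSE4_2__, __AVX__, __AVX2__, ... according to the -m
     * flags, so the same source compiles to a different path in each variant. */
    static const char *dot_path(void) {
    #if defined(__AVX2__)
        return "avx2";     /* the alderlake object (-mavx2 -mavxvnni ...) */
    #elif defined(__AVX__)
        return "avx";      /* the sandybridge object (-mavx) */
    #elif defined(__SSE4_2__)
        return "sse4.2";   /* the sse42 object (-msse4.2) */
    #else
        return "baseline"; /* the x64 object (plain -march=x86-64) */
    #endif
    }

    int main(void) {
        printf("this object was compiled for the %s path\n", dot_path());
        return 0;
    }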
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, 
const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, 
size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o [ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 24%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 24%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 26%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 27%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
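Note on the recurring -Wunused-function output: ggml-impl.h defines its small helpers (ggml_hash_insert, ggml_set_op_params, ggml_are_same_layout, and the others listed above) as plain static functions, so every translation unit that includes the header receives a private copy, and with -Wall GCC flags each copy that is never called, once per including file and per CPU variant. Only -Werror=format-security is promoted to an error in these flags, so the warnings are noisy but harmless to the build. A minimal hedged C reproduction of the pattern, with the usual ways headers avoid it (file and helper names are illustrative, not the real ggml code):

    #include <stdio.h>
    #include <stddef.h>

    /* Imagine these three helpers live in a widely included header such as
     * ggml-impl.h; every .c file that includes it gets its own static copies. */

    static size_t helper_plain(size_t n) {         /* plain 'static': -Wall warns in any
                                                      TU that includes but never calls it */
        return n * 2;
    }

    static inline size_t helper_inline(size_t n) { /* 'static inline': GCC stays silent
                                                      even when unused */
        return n * 2;
    }

    __attribute__((unused))
    static size_t helper_marked(size_t n) {        /* explicit unused attribute: also silent */
        return n * 2;
    }

    int main(void) {
        /* Nothing calls the helpers, mimicking a file that includes the header
         * for a few definitions only; gcc -Wall warns about helper_plain alone. */
        printf("compile with: gcc -Wall demo.c\n");
        return 0;
    }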
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: 
warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 32%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 35%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 36%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const 
ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t 
i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 39%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: 
‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, 
uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used 
[-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined 
but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 41%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 42%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float 
ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, 
int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used 
[-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined 
but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 44%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 46%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * 
tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 47%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not 
used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used 
[-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 50%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 51%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 52%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 53%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c [ 54%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: 
‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 55%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 56%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-x64.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-x64.dir/link.txt --verbose=1 In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: 
‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 57%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sse42.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-sse42.dir/link.txt --verbose=1 [ 58%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used 
[-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 59%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
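The same CPU backend sources are compiled several times here with different ISA flag sets: ggml-cpu-sse42 gets only -msse4.2, ggml-cpu-sandybridge adds -mavx, ggml-cpu-alderlake adds -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni, and each variant is linked into its own loadable module under lib/ollama (note -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED), so the best match for the host CPU can be picked when the modules are loaded. The sketch below shows only that selection idea; it is not ggml's or ollama's actual loader, and the ordering is an assumption based on the module names in this log:

    /* pick_backend.c -- illustrative only; the real dispatch lives in ggml's
     * backend registry. Uses GCC's builtin CPU feature probes. */
    #include <dlfcn.h>
    #include <stdio.h>

    static const char *pick_cpu_backend(void) {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx512f")) return "libggml-cpu-skylakex.so";
        if (__builtin_cpu_supports("avx2"))    return "libggml-cpu-haswell.so";
        if (__builtin_cpu_supports("avx"))     return "libggml-cpu-sandybridge.so";
        if (__builtin_cpu_supports("sse4.2"))  return "libggml-cpu-sse42.so";
        return "libggml-cpu-x64.so";           /* assumed plain x86-64 baseline */
    }

    int main(void) {
        const char *name = pick_cpu_backend();
        void *h = dlopen(name, RTLD_NOW | RTLD_LOCAL);
        printf("%s -> %s\n", name, h ? "loaded" : dlerror());
        return h ? 0 : 1;
    }

The alderlake module is the AVX2 build plus AVX-VNNI, so a fuller selector would rank it above haswell on CPUs that report that feature; the chain above is deliberately simplified.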
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 60%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp [ 61%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 62%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sandybridge.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-sandybridge.dir/link.txt --verbose=1 In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
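Every compile and link line in this log also carries Fedora's standard hardening flags injected by redhat-rpm-config: -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 (fortified glibc string and memory functions), -fstack-protector-strong, -fstack-clash-protection, -fcf-protection, the redhat-hardened-cc1/redhat-hardened-ld spec files, full RELRO at link time (-Wl,-z,relro -Wl,-z,now), and -flto=auto -ffat-lto-objects for LTO. As a small, generic glibc illustration of what _FORTIFY_SOURCE buys (unrelated to ollama's own code):

    /* overflow.c -- built with -Wall -O2 -D_FORTIFY_SOURCE=3, as above.
     * Both sizes are compile-time constants, so GCC warns about the copy,
     * and if it still executes, glibc's checked __memcpy_chk aborts with
     * "buffer overflow detected" instead of silently overrunning dst. */
    #include <string.h>

    int main(void) {
        char dst[8];
        char src[16] = "0123456789abcde";
        memcpy(dst, src, sizeof src);   /* 16 bytes into an 8-byte buffer */
        return dst[0];
    }

Without optimization the fortified wrappers are not used, which is why the fortify macro is only meaningful alongside the -O2 seen in these command lines.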
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 62%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-alderlake.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-alderlake.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-x64.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-x64.so "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 62%] Built target ggml-cpu-x64 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 
-Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-sse42.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-sse42.so "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 62%] Built target ggml-cpu-sse42 /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 63%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 
-DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: 
‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 64%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 
| static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 64%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-sandybridge.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-sandybridge.so "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 64%] Built target ggml-cpu-sandybridge In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct 
ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 65%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 66%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: 
‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 67%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 69%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-alderlake.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-alderlake.so "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 69%] Built target ggml-cpu-alderlake [ 70%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 
-DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 71%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static 
float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 73%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | 
static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 73%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 74%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 75%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 77%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: 
‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t 
ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 78%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct 
ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 80%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 80%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined 
but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 81%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 83%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 83%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 84%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 85%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
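The repeated ‘defined but not used [-Wunused-function]’ diagnostics above all point at plain static (non-inline) helpers defined in ggml-impl.h: every translation unit that includes that header without calling a given helper emits one warning per helper, which is why the same list recurs for each object file in this log. A minimal sketch of the mechanism, using hypothetical names rather than ggml code (compile with gcc -Wall -c sketch.c):

/* sketch.c -- reproduces the class of warning seen in this log (illustrative only) */
#include <stddef.h>

/* A plain 'static' helper defined in a header-style context: if the including
 * translation unit never calls it, GCC with -Wall reports
 * "defined but not used [-Wunused-function]". */
static size_t helper_plain(size_t n) {
    return (n + 63) / 64;
}

/* Declaring the helper 'static inline' (or tagging it __attribute__((unused)))
 * exempts it from the diagnostic without changing behavior. */
static inline size_t helper_inline(size_t n) {
    return (n + 63) / 64;
}

int main(void) {
    return 0;   /* neither helper is used: only helper_plain is reported */
}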
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | 
static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float 
ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 85%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 86%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, 
uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 87%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 88%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
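The compile lines in this part of the log pair machine flags with matching feature defines (for example -mavx2 with -DGGML_AVX2, and -mavx512f/-mavx512bw/... with -DGGML_AVX512): the -m flags allow the compiler to emit those instructions, while the defines (or the compiler's own __AVX2__-style macros) select which intrinsic code path is compiled into that object. A minimal sketch of the pattern, assuming a hypothetical helper that is not part of ggml (build with gcc -O2 -mavx2 isa_path.c for the AVX2 path, or without -mavx2 for the portable fallback):

/* isa_path.c -- illustrative only, not ggml source */
#include <stdio.h>
#if defined(__AVX2__)
#include <immintrin.h>
#endif

static float sum8(const float *v) {
#if defined(__AVX2__)
    /* AVX2 path: only compiled when this translation unit is built with -mavx2 */
    __m256 x  = _mm256_loadu_ps(v);
    __m128 lo = _mm256_castps256_ps128(x);
    __m128 hi = _mm256_extractf128_ps(x, 1);
    __m128 s  = _mm_add_ps(lo, hi);
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
#else
    float s = 0.0f;                 /* portable scalar fallback */
    for (int i = 0; i < 8; ++i) s += v[i];
    return s;
#endif
}

int main(void) {
    float v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%.1f\n", sum8(v));      /* prints 36.0 on either path */
    return 0;
}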
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 89%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | 
static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
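Throughout this phase the same ggml-cpu sources (vec.cpp, ops.cpp, binary-ops.cpp, llamafile/sgemm.cpp, arch/x86/quants.c, ...) are compiled repeatedly into separate per-microarchitecture targets (ggml-cpu-haswell, ggml-cpu-skylakex, ggml-cpu-icelake), each with a progressively wider ISA feature set, and -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED marks them as dynamically loadable backend libraries, so a variant matching the running CPU can be selected at run time. A rough sketch of that kind of feature probe, with hypothetical names that are not ollama's actual loader code:

/* pick_backend.c -- illustrative CPU-feature probe, not ollama/ggml source */
#include <stdio.h>

static const char *pick_cpu_backend(void) {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx512vnni") && __builtin_cpu_supports("avx512vbmi"))
        return "ggml-cpu-icelake";   /* matches the -mavx512vbmi -mavx512vnni build above */
    if (__builtin_cpu_supports("avx512f"))
        return "ggml-cpu-skylakex";  /* AVX-512 F/CD/VL/DQ/BW build */
    if (__builtin_cpu_supports("avx2"))
        return "ggml-cpu-haswell";   /* AVX2/FMA/F16C/BMI2 build */
    return "ggml-cpu";               /* baseline x86-64 fallback */
}

int main(void) {
    printf("best matching backend variant: %s\n", pick_cpu_backend());
    return 0;
}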
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 92%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not 
used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used 
[-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t 
ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 94%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 94%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-haswell.dir/link.txt --verbose=1 In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float 
ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 95%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 96%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp [ 97%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const 
struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 98%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: 
warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-skylakex.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-skylakex.dir/link.txt --verbose=1 [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-icelake.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-icelake.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-haswell.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-haswell.so "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory 
'/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-haswell /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-skylakex.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-skylakex.so "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-skylakex /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-icelake.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-icelake.so "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o" 
"CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-icelake /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Nothing to be done for 'ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build'. 
gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu gmake[2]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/CMakeFiles 0 gmake[1]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + /usr/bin/cmake -S . 
-B redhat-linux-build_ggml-rocm-6 -DCMAKE_C_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_Fortran_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_INSTALL_DO_STRIP:BOOL=OFF -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_FULL_SBINDIR:PATH=/usr/bin -DCMAKE_INSTALL_SBINDIR:PATH=bin -DINCLUDE_INSTALL_DIR:PATH=/usr/include -DLIB_INSTALL_DIR:PATH=/usr/lib64 -DSYSCONF_INSTALL_DIR:PATH=/etc -DSHARE_INSTALL_PREFIX:PATH=/usr/share -DLIB_SUFFIX=64 -DBUILD_SHARED_LIBS:BOOL=ON --preset 'ROCm 6'
Preset CMake variables:
  AMDGPU_TARGETS="gfx900;gfx940;gfx941;gfx942;gfx1010;gfx1012;gfx1030;gfx1100;gfx1101;gfx1102;gfx1151;gfx1200;gfx1201;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-"
  CMAKE_BUILD_TYPE="Release"
  CMAKE_HIP_FLAGS="-parallel-jobs=4"
  CMAKE_HIP_PLATFORM="amd"
  CMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded"
-- The C compiler identification is GNU 15.2.1
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- Looking for a HIP compiler
-- Looking for a HIP compiler - /usr/lib64/rocm/llvm/bin/clang++
CMake Warning (dev) at /usr/lib64/cmake/hip/hip-config-amd.cmake:70 (message):
  AMDGPU_TARGETS is deprecated. Please use GPU_TARGETS instead.
Call Stack (most recent call first):
  /usr/lib64/cmake/hip/hip-config.cmake:148 (include)
  CMakeLists.txt:97 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
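The configure output above lists several CPU backend variants (ggml-cpu-x64 through ggml-cpu-alderlake), each compiled with a progressively larger set of -m flags; only a variant whose instruction-set baseline the host CPU actually supports is usable at run time. A minimal sketch of that kind of feature-based selection using GCC's __builtin_cpu_supports follows; the ladder only mirrors the variant names printed above, and the real selection logic in ggml (its cpu-feats scoring) is different.

/* cpu_variant_demo.c -- illustrative only; build with: gcc -Wall cpu_variant_demo.c
 * Prints the newest backend-variant name whose ISA features the running CPU
 * supports, mirroring the variant list from the configure output above. */
#include <stdio.h>

static const char *pick_variant(void) {
    __builtin_cpu_init();   /* initialize GCC's CPU feature detection */
    if (__builtin_cpu_supports("avx512vnni") && __builtin_cpu_supports("avx512vbmi"))
        return "ggml-cpu-icelake";
    if (__builtin_cpu_supports("avx512bw") && __builtin_cpu_supports("avx512vl"))
        return "ggml-cpu-skylakex";
    if (__builtin_cpu_supports("avx2") && __builtin_cpu_supports("fma"))
        return "ggml-cpu-haswell";
    if (__builtin_cpu_supports("avx"))
        return "ggml-cpu-sandybridge";
    if (__builtin_cpu_supports("sse4.2"))
        return "ggml-cpu-sse42";
    return "ggml-cpu-x64";  /* plain x86-64 baseline */
}

int main(void) {
    printf("best matching variant: %s\n", pick_variant());
    return 0;
}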
-- The HIP compiler identification is Clang 20.0.0 -- Detecting HIP compiler ABI info -- Detecting HIP compiler ABI info - done -- Check for working HIP compiler: /usr/lib64/rocm/llvm/bin/clang++ - skipped -- Detecting HIP compile features -- Detecting HIP compile features - done -- HIP and hipBLAS found -- Configuring done (3.8s) -- Generating done (0.0s) CMake Warning: Manually-specified variables were not used by the project: CMAKE_Fortran_FLAGS_RELEASE CMAKE_INSTALL_DO_STRIP INCLUDE_INSTALL_DIR LIB_SUFFIX SHARE_INSTALL_PREFIX SYSCONF_INSTALL_DIR -- Build files have been written to: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 + /usr/bin/cmake --build redhat-linux-build_ggml-rocm-6 -j4 --verbose --target ggml-hip Change Dir: '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile -j4 ggml-hip /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/gmake -f CMakeFiles/Makefile2 ggml-hip gmake[1]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/CMakeFiles 47 /usr/bin/gmake -f CMakeFiles/Makefile2 ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/all gmake[2]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [ 2%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o [ 4%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o cd 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF CMakeFiles/ggml-base.dir/ggml.c.o.d -o CMakeFiles/ggml-base.dir/ggml.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o -MF CMakeFiles/ggml-base.dir/ggml.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5663:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function] 5663 | static void ggml_hash_map_free(struct hash_map * map) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5656:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function] 5656 | static struct hash_map * ggml_new_hash_map(size_t size) { | ^~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * 
tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp:14: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED 
-DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-threading.cpp [ 6%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:4067:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function] 4067 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid, | ^~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function] 579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min, | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o CMakeFiles/ggml-base.dir/gguf.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp:3: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used 
[-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 8%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-base.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-base.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared 
-Wl,-soname,libggml-base.so -o ../../../../../lib/ollama/libggml-base.so "CMakeFiles/ggml-base.dir/ggml.c.o" "CMakeFiles/ggml-base.dir/ggml.cpp.o" "CMakeFiles/ggml-base.dir/ggml-alloc.c.o" "CMakeFiles/ggml-base.dir/ggml-backend.cpp.o" "CMakeFiles/ggml-base.dir/ggml-opt.cpp.o" "CMakeFiles/ggml-base.dir/ggml-threading.cpp.o" "CMakeFiles/ggml-base.dir/ggml-quants.c.o" "CMakeFiles/ggml-base.dir/gguf.cpp.o" -lm gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [ 8%] Built target ggml-base /usr/bin/gmake -f ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build.make ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/DependInfo.cmake "--color=" Dependee "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/DependInfo.cmake" is newer than depender "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend.internal". Dependee "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/CMakeDirectoryInformation.cmake" is newer than depender "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend.internal". 
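The long run of -Wunused-function warnings emitted while compiling ggml-base above is harmless: ggml-impl.h defines its small helpers as plain static functions, so every .c/.cpp file that includes the header gets its own private copy, and GCC warns about each copy that the particular translation unit never calls. A minimal reproduction of the pattern is sketched below, with hypothetical file and function names that are not taken from the ggml sources.

/* unused_static_demo.c -- illustrative only; compile with: gcc -Wall -c unused_static_demo.c
 * A plain static function that the translation unit never calls triggers
 * "defined but not used [-Wunused-function]"; a static inline one does not. */

static int helper_plain(int x) {          /* warns: defined but not used */
    return x + 1;
}

static inline int helper_inline(int x) {  /* exempt from -Wunused-function */
    return x + 1;
}

int entry_point(void) {
    return 0;                             /* neither helper is referenced */
}

Marking such header-only helpers static inline, or annotating them with __attribute__((unused)), is the usual way to silence the warning; it has no effect on the resulting libggml-base.so.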
Scanning dependencies of target ggml-hip gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/gmake -f ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build.make ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/acc.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/arange.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cu [ 12%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cu [ 12%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cu [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cu [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/concat.cu [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cu [ 17%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cu [ 17%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cu [ 19%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/convert.cu [ 19%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cu [ 21%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu [ 21%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile-f16.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile-f32.cu [ 25%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu [ 25%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cu [ 27%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cu [ 27%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a 
constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx1100. 1 warning generated when compiling for gfx1012. 1 warning generated when compiling for gfx1030. 1 warning generated when compiling for gfx1010. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: 
variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx1101. 1 warning generated when compiling for gfx1200. 1 warning generated when compiling for gfx1102. 1 warning generated when compiling for gfx1151. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ [ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/gla.cu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx1201. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx908. 
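Note on the repeated diagnostics above: every -Wvla-cxx-extension warning in this log points at the same two lines of ggml-cuda.cu (146-147). Clang re-emits it once per --offload-arch target because archLen is computed at run time with strlen(), so the array bound is not a constant expression and char archName[archLen + 1] is a variable-length array, which ISO C++ does not allow. Purely as an illustrative sketch, and not the upstream code or any patch applied in this build, a standard-conforming alternative could replace the VLA with a std::string; devName and the surrounding function are assumed context reconstructed from the quoted diagnostic.

    // Hypothetical sketch of a VLA-free rewrite of the snippet quoted by the
    // warning (ggml-cuda.cu:146-147). Names follow the diagnostic; the enclosing
    // function and how archName is used afterwards are assumptions.
    #include <cstring>
    #include <string>

    static std::string copy_arch_name(const char *devName) {
        // original (Clang extension, triggers -Wvla-cxx-extension):
        //   int  archLen = strlen(devName);
        //   char archName[archLen + 1];
        // std::string allocates the buffer at run time, so no VLA is needed.
        return std::string(devName, std::strlen(devName));
    }

Because the warning is benign (Clang accepts the extension in HIP/C++ code), it does not affect the build result; the sketch only shows one way the diagnostic could be silenced without -Wno-vla-cxx-extension.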
[ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cu 1 warning generated when compiling for gfx906. 1 warning generated when compiling for gfx900. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx90a. 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx90a. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ [ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mean.cu 1 warning generated when compiling for gfx940. 1 warning generated when compiling for gfx941. 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx942. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for host. [ 31%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cu [ 31%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu [ 34%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cu [ 34%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cu [ 36%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/norm.cu [ 36%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/pad.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cu [ 40%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cu [ 40%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/roll.cu [ 42%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/rope.cu [ 42%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/scale.cu [ 44%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cu [ 44%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cu [ 48%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/sum.cu [ 48%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cu [ 51%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cu [ 51%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/unary.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu [ 55%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu [ 55%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu [ 57%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu [ 57%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu [ 59%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu [ 59%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu [ 63%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu [ 63%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu [ 65%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu [ 65%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu [ 70%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu [ 70%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu [ 72%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq1_s.cu [ 72%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_s.cu [ 74%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu [ 74%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_s.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu [ 78%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu [ 78%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-mxfp4.cu [ 80%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q2_k.cu [ 80%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q3_k.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_0.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_1.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_k.cu [ 85%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_0.cu [ 85%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_1.cu [ 87%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_k.cu [ 87%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q6_k.cu [ 89%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q8_0.cu [ 89%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu [ 93%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu [ 93%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu [ 95%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu [ 95%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu [ 97%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu [ 97%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_HIP_ROCWMMA_FATTN_GFX12 -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu [100%] Linking HIP shared module ../../../../../../lib/ollama/libggml-hip.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-hip.dir/link.txt --verbose=1 /usr/lib64/rocm/llvm/bin/clang++ -fPIC -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- --hip-link --rtlib=compiler-rt -unwindlib=libgcc -Xlinker --dependency-file=CMakeFiles/ggml-hip.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../../lib/ollama/libggml-hip.so "CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o" 
"CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o" 
"CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/lib/ollama: ../../../../../../liclang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-hardened-ld' [-Wunused-command-line-argument] clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors' [-Wunused-command-line-argument] clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1' [-Wunused-command-line-argument] clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-package-notes' [-Wunused-command-line-argument] b/ollama/libggml-base.so /usr/lib64/libhipblas.so.3.0 /usr/lib64/librocblas.so.5.0 /usr/lib64/libamdhip64.so.7.0.51831 /usr/lib64/libamdhip64.so.7.0.51831 gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [100%] Built target ggml-hip gmake[2]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/CMakeFiles 0 gmake[1]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' + RPM_EC=0 ++ jobs -p + exit 0 Executing(%install): 
/bin/sh -e /var/tmp/rpm-tmp.Q3bWLc + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + '[' /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT '!=' / ']' + rm -rf /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT ++ dirname /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + mkdir -p /builddir/build/BUILD/ollama-0.12.3-build + mkdir /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml install --destdir /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT --install-directory /usr/share/licenses/ollama --filelist licenses.list Using detector: askalono + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin' + install -m 0755 -vp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ollama' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig' + install -m 0644 -vp /builddir/build/SOURCES/sysconfig-ollama /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig/ollama '/builddir/build/SOURCES/sysconfig-ollama' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig/ollama' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system' + install -m 0644 -vp /builddir/build/SOURCES/ollama.service /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system/ollama.service '/builddir/build/SOURCES/ollama.service' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system/ollama.service' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d' + install -m 0644 -vp /builddir/build/SOURCES/ollama-user.conf /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d/ollama.conf '/builddir/build/SOURCES/ollama-user.conf' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d/ollama.conf' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib/ollama install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib/ollama' + DESTDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + /usr/bin/cmake --install redhat-linux-build_ggml-cpu --component CPU -- Install configuration: "Release" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-base.so -- Installing: 
/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so" to "" + DESTDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + /usr/bin/cmake --install redhat-linux-build_ggml-rocm-6 --component HIP -- Install configuration: "Release" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-hip.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-hip.so" to "" + /usr/bin/find-debuginfo -j4 --strict-build-id -m -i --build-id-seed 0.12.3-1.fc44 --unique-debug-suffix -0.12.3-1.fc44.x86_64 --unique-debug-src-base ollama-0.12.3-1.fc44.x86_64 --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 -S debugsourcefiles.list /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 find-debuginfo: starting Extracting debug info from 10 files warning: Unsupported auto-load script at offset 0 in section .debug_gdb_scripts of file /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ollama. Use `info auto-load python-scripts [REGEXP]' to list them. DWARF-compressing 10 files dwz: ./usr/lib64/ollama/libggml-hip.so-0.12.3-1.fc44.x86_64.debug: Unknown debugging section .debug_str_offsets dwz: ./usr/lib64/ollama/libggml-hip.so-0.12.3-1.fc44.x86_64.debug: Unknown debugging section .debug_str_offsets sepdebugcrcfix: Updated 9 CRC32s, 1 CRC32s did match. 
Creating .debug symlinks for symlinks to ELF files Copying sources found by 'debugedit -l' to /usr/src/debug/ollama-0.12.3-1.fc44.x86_64 find-debuginfo: done + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-ldconfig + /usr/lib/rpm/brp-compress + /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip + /usr/lib/rpm/check-rpaths + /usr/lib/rpm/redhat/brp-mangle-shebangs + /usr/lib/rpm/brp-remove-la-files + /usr/lib/rpm/redhat/brp-python-rpm-in-distinfo + env /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 -j4 + /usr/lib/rpm/redhat/brp-python-hardlink + /usr/bin/add-det --brp -j4 /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT Scanned 497 directories and 1628 files, processed 0 inodes, 0 modified (0 replaced + 0 rewritten), 0 unsupported format, 0 errors + /usr/bin/linkdupes --brp /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr Scanned 491 directories and 1627 files, considered 1627 files, read 165 files, linked 26 files, 0 errors sum of sizes of linked files: 83728 bytes Reading /builddir/build/BUILD/ollama-0.12.3-build/SPECPARTS/rpm-debuginfo.specpart Executing(%check): /bin/sh -e /var/tmp/rpm-tmp.Pe85En + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml report all --verify 'Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSL-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC0-1.0 AND ISC AND LicenseRef-Fedora-Public-Domain AND LicenseRef-scancode-protobuf AND MIT AND NCSA AND NTP AND OpenSSL AND ZPL-2.1 AND Zlib' Using detector: askalono LICENSE: MIT convert/sentencepiece/LICENSE: Apache-2.0 llama/llama.cpp/LICENSE: MIT ml/backend/ggml/ggml/LICENSE: MIT vendor/github.com/agnivade/levenshtein/License.txt: MIT vendor/github.com/apache/arrow/go/arrow/LICENSE.txt: (Apache-2.0 AND BSD-3-Clause) AND BSD-3-Clause AND CC0-1.0 AND (LicenseRef-scancode-public-domain AND MIT) AND Apache-2.0 AND BSL-1.0 AND (BSD-2-Clause AND BSD-3-Clause) AND MIT AND (BSL-1.0 AND BSD-2-Clause) AND BSD-2-Clause AND ZPL-2.1 AND LicenseRef-scancode-protobuf AND NCSA AND (CC-BY-3.0 AND MIT) AND (CC-BY-4.0 AND LicenseRef-scancode-public-domain) AND NTP AND Zlib AND OpenSSL AND (BSD-3-Clause AND BSD-2-Clause) AND (BSD-2-Clause AND Zlib) vendor/github.com/bytedance/sonic/LICENSE: Apache-2.0 vendor/github.com/bytedance/sonic/loader/LICENSE: Apache-2.0 vendor/github.com/chewxy/hm/LICENCE: MIT vendor/github.com/chewxy/math32/LICENSE: BSD-2-Clause vendor/github.com/cloudwego/base64x/LICENSE: Apache-2.0 vendor/github.com/cloudwego/base64x/LICENSE-APACHE: Apache-2.0 vendor/github.com/cloudwego/iasm/LICENSE-APACHE: Apache-2.0 vendor/github.com/containerd/console/LICENSE: Apache-2.0 vendor/github.com/d4l3k/go-bfloat16/LICENSE: MIT vendor/github.com/davecgh/go-spew/LICENSE: ISC vendor/github.com/dlclark/regexp2/LICENSE: MIT vendor/github.com/emirpasic/gods/v2/LICENSE: BSD-2-Clause AND ISC vendor/github.com/gabriel-vasile/mimetype/LICENSE: MIT vendor/github.com/gin-contrib/cors/LICENSE: MIT vendor/github.com/gin-contrib/sse/LICENSE: MIT vendor/github.com/gin-gonic/gin/LICENSE: MIT vendor/github.com/go-playground/locales/LICENSE: MIT vendor/github.com/go-playground/universal-translator/LICENSE: MIT vendor/github.com/go-playground/validator/v10/LICENSE: MIT vendor/github.com/goccy/go-json/LICENSE: MIT vendor/github.com/gogo/protobuf/LICENSE: BSD-3-Clause vendor/github.com/golang/protobuf/LICENSE: BSD-3-Clause vendor/github.com/google/flatbuffers/LICENSE: Apache-2.0 vendor/github.com/google/go-cmp/LICENSE: BSD-3-Clause 
vendor/github.com/google/uuid/LICENSE: BSD-3-Clause vendor/github.com/inconshreveable/mousetrap/LICENSE: Apache-2.0 vendor/github.com/json-iterator/go/LICENSE: MIT vendor/github.com/klauspost/cpuid/v2/LICENSE: MIT vendor/github.com/leodido/go-urn/LICENSE: MIT vendor/github.com/mattn/go-isatty/LICENSE: MIT vendor/github.com/mattn/go-runewidth/LICENSE: MIT vendor/github.com/modern-go/concurrent/LICENSE: Apache-2.0 vendor/github.com/modern-go/reflect2/LICENSE: Apache-2.0 vendor/github.com/nlpodyssey/gopickle/LICENSE: BSD-2-Clause vendor/github.com/olekukonko/tablewriter/LICENSE.md: MIT vendor/github.com/pdevine/tensor/LICENCE: Apache-2.0 vendor/github.com/pelletier/go-toml/v2/LICENSE: MIT vendor/github.com/pkg/errors/LICENSE: BSD-2-Clause vendor/github.com/pmezard/go-difflib/LICENSE: BSD-3-Clause vendor/github.com/rivo/uniseg/LICENSE.txt: MIT vendor/github.com/spf13/cobra/LICENSE.txt: Apache-2.0 vendor/github.com/spf13/pflag/LICENSE: BSD-3-Clause vendor/github.com/stretchr/testify/LICENSE: MIT vendor/github.com/twitchyliquid64/golang-asm/LICENSE: BSD-3-Clause vendor/github.com/ugorji/go/codec/LICENSE: MIT vendor/github.com/x448/float16/LICENSE: MIT vendor/github.com/xtgo/set/LICENSE: BSD-2-Clause vendor/go4.org/unsafe/assume-no-moving-gc/LICENSE: BSD-3-Clause vendor/golang.org/x/arch/LICENSE: BSD-3-Clause vendor/golang.org/x/crypto/LICENSE: BSD-3-Clause vendor/golang.org/x/exp/LICENSE: BSD-3-Clause vendor/golang.org/x/image/LICENSE: BSD-3-Clause vendor/golang.org/x/net/LICENSE: BSD-3-Clause vendor/golang.org/x/sync/LICENSE: BSD-3-Clause vendor/golang.org/x/sys/LICENSE: BSD-3-Clause vendor/golang.org/x/term/LICENSE: BSD-3-Clause vendor/golang.org/x/text/LICENSE: BSD-3-Clause vendor/golang.org/x/tools/LICENSE: BSD-3-Clause vendor/golang.org/x/xerrors/LICENSE: BSD-3-Clause vendor/gonum.org/v1/gonum/LICENSE: BSD-3-Clause vendor/google.golang.org/protobuf/LICENSE: BSD-3-Clause vendor/gopkg.in/yaml.v3/LICENSE: MIT AND (MIT AND Apache-2.0) vendor/gorgonia.org/vecf32/LICENSE: MIT vendor/gorgonia.org/vecf64/LICENSE: MIT Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSL-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC0-1.0 AND ISC AND LicenseRef-Fedora-Public-Domain AND LicenseRef-scancode-protobuf AND MIT AND NCSA AND NTP AND OpenSSL AND ZPL-2.1 AND Zlib + GO_LDFLAGS=' -X github.com/ollama/ollama/version=0.12.3' + GO_TEST_FLAGS='-buildmode pie -compiler gc' + GO_TEST_EXT_LD_FLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + go-rpm-integration check -i github.com/ollama/ollama -b /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin -s /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build -V 0.12.3-1.fc44 -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT -g /usr/share/gocode -r '.*example.*' Testing in: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src PATH: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin:/usr/bin:/bin:/usr/sbin:/sbin GOPATH: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode GO111MODULE: off command: go test -buildmode pie -compiler gc -ldflags " -X github.com/ollama/ollama/version=0.12.3 -extldflags '-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld 
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '" testing: github.com/ollama/ollama github.com/ollama/ollama/api 2025/10/04 05:49:29 http: superfluous response.WriteHeader call from github.com/ollama/ollama/api.TestClientStream.func1.1 (client_test.go:128) PASS ok github.com/ollama/ollama/api 0.011s github.com/ollama/ollama/api 2025/10/04 05:49:29 http: superfluous response.WriteHeader call from github.com/ollama/ollama/api.TestClientStream.func1.1 (client_test.go:128) PASS ok github.com/ollama/ollama/api 0.011s github.com/ollama/ollama/app/assets ? github.com/ollama/ollama/app/assets [no test files] github.com/ollama/ollama/app/lifecycle PASS ok github.com/ollama/ollama/app/lifecycle 0.003s github.com/ollama/ollama/app/lifecycle PASS ok github.com/ollama/ollama/app/lifecycle 0.003s github.com/ollama/ollama/app/store ? github.com/ollama/ollama/app/store [no test files] github.com/ollama/ollama/app/tray ? github.com/ollama/ollama/app/tray [no test files] github.com/ollama/ollama/app/tray/commontray ? github.com/ollama/ollama/app/tray/commontray [no test files] github.com/ollama/ollama/auth ? github.com/ollama/ollama/auth [no test files] github.com/ollama/ollama/cmd deleted 'test-model' Couldn't find '/tmp/TestPushHandlersuccessful_push3941978203/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOqIzZrXuqbAJ/NGLGhtXlrsaqVleYldgbpxuwig0yE4 Couldn't find '/tmp/TestPushHandlernot_signed_in_push3405887526/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICDZstO26Qrv2GuCj9a4KXPgxz16O0P9rYMCtzVSp3Zi Couldn't find '/tmp/TestPushHandlerunauthorized_push963916843/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBsw/tY3iYpXIWRs28Dd4pl6sF6dP5wvRm4+6ZATyG7e Added image '/tmp/TestExtractFileDataRemovesQuotedFilepath3116287578/001/img.jpg' PASS ok github.com/ollama/ollama/cmd 0.025s github.com/ollama/ollama/cmd deleted 'test-model' Couldn't find '/tmp/TestPushHandlersuccessful_push1705789192/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHCPLCYhAGXqqjyphOY72JIPTl6BTRqkqA8QAiDPHUkf Couldn't find '/tmp/TestPushHandlernot_signed_in_push3127998918/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMl8Py3ivIVF6rkbnBgINKKlSbv6Wn/L42mqgKmKQp1K Couldn't find '/tmp/TestPushHandlerunauthorized_push1462155311/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAbc1j94Z5x6boyuUgOc6q4pH0cCFeBCnqSH2HJLXSfT Added image '/tmp/TestExtractFileDataRemovesQuotedFilepath3627491172/001/img.jpg' PASS ok github.com/ollama/ollama/cmd 0.024s github.com/ollama/ollama/convert PASS ok github.com/ollama/ollama/convert 0.015s github.com/ollama/ollama/convert PASS ok github.com/ollama/ollama/convert 0.015s github.com/ollama/ollama/convert/sentencepiece ? 
github.com/ollama/ollama/convert/sentencepiece [no test files] github.com/ollama/ollama/discover 2025/10/04 05:51:59 INFO example scenario="#5554 LXC direct output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:51:59 INFO example scenario="#5554 LXC docker container output" cpus="[{ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:29 EfficiencyCoreCount:0 ThreadCount:29}]" 2025/10/04 05:51:59 INFO example scenario="#5554 LXC docker output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:51:59 INFO example scenario="#7359 VMware multi-core core VM" cpus="[{ID:0 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:10 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:12 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:14 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:2 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:4 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:6 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:8 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1}]" 2025/10/04 05:51:59 INFO example scenario="#7287 HyperV 2 socket exposed to VM" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:51:59 INFO example scenario="#5554 Docker Ollama container inside the LXC" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:51:59 INFO looking for compatible GPUs 2025/10/04 05:51:59 INFO no compatible GPUs were discovered PASS ok github.com/ollama/ollama/discover 0.007s github.com/ollama/ollama/discover 2025/10/04 05:51:59 INFO example scenario="#5554 Docker Ollama container inside the LXC" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:51:59 INFO example scenario="#5554 LXC direct output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:51:59 INFO example scenario="#5554 LXC docker container output" cpus="[{ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:29 EfficiencyCoreCount:0 ThreadCount:29}]" 2025/10/04 05:51:59 INFO example scenario="#5554 LXC docker output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 
128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:51:59 INFO example scenario="#7359 VMware multi-core core VM" cpus="[{ID:0 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:10 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:12 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:14 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:2 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:4 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:6 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:8 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1}]" 2025/10/04 05:51:59 INFO example scenario="#7287 HyperV 2 socket exposed to VM" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:51:59 INFO looking for compatible GPUs 2025/10/04 05:51:59 INFO no compatible GPUs were discovered PASS ok github.com/ollama/ollama/discover 0.008s github.com/ollama/ollama/envconfig 2025/10/04 05:51:59 WARN invalid port, using default port=66000 default=11434 2025/10/04 05:51:59 WARN invalid port, using default port=-1 default=11434 2025/10/04 05:51:59 WARN invalid environment variable, using default key=OLLAMA_UINT value=0x10 default=11434 2025/10/04 05:51:59 WARN invalid environment variable, using default key=OLLAMA_UINT value=string default=11434 2025/10/04 05:51:59 WARN invalid environment variable, using default key=OLLAMA_UINT value=-1 default=11434 2025/10/04 05:51:59 WARN invalid environment variable, using default key=OLLAMA_UINT value=0o10 default=11434 PASS ok github.com/ollama/ollama/envconfig 0.005s github.com/ollama/ollama/envconfig 2025/10/04 05:52:00 WARN invalid port, using default port=-1 default=11434 2025/10/04 05:52:00 WARN invalid port, using default port=66000 default=11434 2025/10/04 05:52:00 WARN invalid environment variable, using default key=OLLAMA_UINT value=string default=11434 2025/10/04 05:52:00 WARN invalid environment variable, using default key=OLLAMA_UINT value=-1 default=11434 2025/10/04 05:52:00 WARN invalid environment variable, using default key=OLLAMA_UINT value=0o10 default=11434 2025/10/04 05:52:00 WARN invalid environment variable, using default key=OLLAMA_UINT value=0x10 default=11434 PASS ok github.com/ollama/ollama/envconfig 0.005s github.com/ollama/ollama/format PASS ok github.com/ollama/ollama/format 0.003s github.com/ollama/ollama/format PASS ok github.com/ollama/ollama/format 0.002s github.com/ollama/ollama/fs ? 
github.com/ollama/ollama/fs [no test files] github.com/ollama/ollama/fs/ggml PASS ok github.com/ollama/ollama/fs/ggml 0.005s github.com/ollama/ollama/fs/ggml PASS ok github.com/ollama/ollama/fs/ggml 0.006s github.com/ollama/ollama/fs/gguf PASS ok github.com/ollama/ollama/fs/gguf 0.004s github.com/ollama/ollama/fs/gguf PASS ok github.com/ollama/ollama/fs/gguf 0.004s github.com/ollama/ollama/fs/util/bufioutil PASS ok github.com/ollama/ollama/fs/util/bufioutil 0.002s github.com/ollama/ollama/fs/util/bufioutil PASS ok github.com/ollama/ollama/fs/util/bufioutil 0.002s github.com/ollama/ollama/harmony event: {} event: {Header:{Role:user Channel: Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_weather}} event: {Content:{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|message|>{"sunny": true, "temperature": 20}} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:User asks weather in SF. We need location. Use get_current_weather with location "San Francisco, CA".} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_current_weather}} event: {Content:{"location":"San Francisco, CA"}<|call|>} PASS ok github.com/ollama/ollama/harmony 0.003s github.com/ollama/ollama/harmony event: {} event: {Header:{Role:user Channel: Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_weather}} event: {Content:{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|message|>{"sunny": true, "temperature": 20}} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:User asks weather in SF. We need location. Use get_current_weather with location "San Francisco, CA".} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_current_weather}} event: {Content:{"location":"San Francisco, CA"}<|call|>} PASS ok github.com/ollama/ollama/harmony 0.004s github.com/ollama/ollama/kvcache PASS ok github.com/ollama/ollama/kvcache 0.002s github.com/ollama/ollama/kvcache PASS ok github.com/ollama/ollama/kvcache 0.003s github.com/ollama/ollama/llama PASS ok github.com/ollama/ollama/llama 0.004s github.com/ollama/ollama/llama PASS ok github.com/ollama/ollama/llama 0.004s github.com/ollama/ollama/llama/llama.cpp/common ? github.com/ollama/ollama/llama/llama.cpp/common [no test files] github.com/ollama/ollama/llama/llama.cpp/src ? github.com/ollama/ollama/llama/llama.cpp/src [no test files] github.com/ollama/ollama/llama/llama.cpp/tools/mtmd ? 
github.com/ollama/ollama/llama/llama.cpp/tools/mtmd [no test files] github.com/ollama/ollama/llm 2025/10/04 05:52:10 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:10 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:10 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:10 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:10 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:10 INFO aborting completion request due to client closing the connection PASS ok github.com/ollama/ollama/llm 0.006s github.com/ollama/ollama/llm 2025/10/04 05:52:11 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:11 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:11 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:11 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:11 INFO aborting completion request due to client closing the connection 2025/10/04 05:52:11 INFO aborting completion request due to client closing the connection PASS ok github.com/ollama/ollama/llm 0.006s github.com/ollama/ollama/logutil ? github.com/ollama/ollama/logutil [no test files] github.com/ollama/ollama/ml ? github.com/ollama/ollama/ml [no test files] github.com/ollama/ollama/ml/backend ? github.com/ollama/ollama/ml/backend [no test files] github.com/ollama/ollama/ml/backend/ggml ? github.com/ollama/ollama/ml/backend/ggml [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src ? github.com/ollama/ollama/ml/backend/ggml/ggml/src [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile [no test files] github.com/ollama/ollama/ml/nn ? github.com/ollama/ollama/ml/nn [no test files] github.com/ollama/ollama/ml/nn/fast ? github.com/ollama/ollama/ml/nn/fast [no test files] github.com/ollama/ollama/ml/nn/pooling 2025/10/04 05:52:14 INFO looking for compatible GPUs 2025/10/04 05:52:14 INFO no compatible GPUs were discovered 2025/10/04 05:52:14 INFO architecture=test file_type=unknown name="" description="" num_tensors=1 num_key_values=3 2025/10/04 05:52:14 INFO system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) PASS ok github.com/ollama/ollama/ml/nn/pooling 0.009s github.com/ollama/ollama/ml/nn/pooling 2025/10/04 05:52:14 INFO looking for compatible GPUs 2025/10/04 05:52:14 INFO no compatible GPUs were discovered 2025/10/04 05:52:14 INFO architecture=test file_type=unknown name="" description="" num_tensors=1 num_key_values=3 2025/10/04 05:52:14 INFO system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) PASS ok github.com/ollama/ollama/ml/nn/pooling 0.009s github.com/ollama/ollama/ml/nn/rope ? 
github.com/ollama/ollama/ml/nn/rope [no test files] github.com/ollama/ollama/model time=2025-10-04T05:52:15.906Z level=DEBUG msg="adding bos token to prompt" id=1 time=2025-10-04T05:52:15.906Z level=DEBUG msg="adding eos token to prompt" id=2 PASS ok github.com/ollama/ollama/model 0.481s github.com/ollama/ollama/model time=2025-10-04T05:52:16.677Z level=DEBUG msg="adding bos token to prompt" id=1 time=2025-10-04T05:52:16.677Z level=DEBUG msg="adding eos token to prompt" id=2 PASS ok github.com/ollama/ollama/model 0.261s github.com/ollama/ollama/model/imageproc PASS ok github.com/ollama/ollama/model/imageproc 0.023s github.com/ollama/ollama/model/imageproc PASS ok github.com/ollama/ollama/model/imageproc 0.024s github.com/ollama/ollama/model/input ? github.com/ollama/ollama/model/input [no test files] github.com/ollama/ollama/model/models ? github.com/ollama/ollama/model/models [no test files] github.com/ollama/ollama/model/models/bert ? github.com/ollama/ollama/model/models/bert [no test files] github.com/ollama/ollama/model/models/deepseek2 ? github.com/ollama/ollama/model/models/deepseek2 [no test files] github.com/ollama/ollama/model/models/gemma2 ? github.com/ollama/ollama/model/models/gemma2 [no test files] github.com/ollama/ollama/model/models/gemma3 ? github.com/ollama/ollama/model/models/gemma3 [no test files] github.com/ollama/ollama/model/models/gemma3n ? github.com/ollama/ollama/model/models/gemma3n [no test files] github.com/ollama/ollama/model/models/gptoss ? github.com/ollama/ollama/model/models/gptoss [no test files] github.com/ollama/ollama/model/models/llama ? github.com/ollama/ollama/model/models/llama [no test files] github.com/ollama/ollama/model/models/llama4 PASS ok github.com/ollama/ollama/model/models/llama4 0.013s github.com/ollama/ollama/model/models/llama4 PASS ok github.com/ollama/ollama/model/models/llama4 0.014s github.com/ollama/ollama/model/models/mistral3 ? github.com/ollama/ollama/model/models/mistral3 [no test files] github.com/ollama/ollama/model/models/mllama PASS ok github.com/ollama/ollama/model/models/mllama 0.514s github.com/ollama/ollama/model/models/mllama PASS ok github.com/ollama/ollama/model/models/mllama 0.531s github.com/ollama/ollama/model/models/qwen2 ? github.com/ollama/ollama/model/models/qwen2 [no test files] github.com/ollama/ollama/model/models/qwen25vl ? github.com/ollama/ollama/model/models/qwen25vl [no test files] github.com/ollama/ollama/model/models/qwen3 ? github.com/ollama/ollama/model/models/qwen3 [no test files] github.com/ollama/ollama/model/parsers PASS ok github.com/ollama/ollama/model/parsers 0.004s github.com/ollama/ollama/model/parsers PASS ok github.com/ollama/ollama/model/parsers 0.004s github.com/ollama/ollama/model/renderers PASS ok github.com/ollama/ollama/model/renderers 0.004s github.com/ollama/ollama/model/renderers PASS ok github.com/ollama/ollama/model/renderers 0.004s github.com/ollama/ollama/openai PASS ok github.com/ollama/ollama/openai 0.012s github.com/ollama/ollama/openai PASS ok github.com/ollama/ollama/openai 0.011s github.com/ollama/ollama/parser PASS ok github.com/ollama/ollama/parser 0.007s github.com/ollama/ollama/parser PASS ok github.com/ollama/ollama/parser 0.006s github.com/ollama/ollama/progress ? github.com/ollama/ollama/progress [no test files] github.com/ollama/ollama/readline ? github.com/ollama/ollama/readline [no test files] github.com/ollama/ollama/runner ? 
github.com/ollama/ollama/runner [no test files] github.com/ollama/ollama/runner/common PASS ok github.com/ollama/ollama/runner/common 0.002s github.com/ollama/ollama/runner/common PASS ok github.com/ollama/ollama/runner/common 0.002s github.com/ollama/ollama/runner/llamarunner PASS ok github.com/ollama/ollama/runner/llamarunner 0.005s github.com/ollama/ollama/runner/llamarunner PASS ok github.com/ollama/ollama/runner/llamarunner 0.005s github.com/ollama/ollama/runner/ollamarunner PASS ok github.com/ollama/ollama/runner/ollamarunner 0.005s github.com/ollama/ollama/runner/ollamarunner PASS ok github.com/ollama/ollama/runner/ollamarunner 0.005s github.com/ollama/ollama/sample PASS ok github.com/ollama/ollama/sample 0.188s github.com/ollama/ollama/sample PASS ok github.com/ollama/ollama/sample 0.191s github.com/ollama/ollama/server time=2025-10-04T05:52:31.537Z level=INFO source=logging.go:32 msg="ollama app started" time=2025-10-04T05:52:31.538Z level=DEBUG source=convert.go:232 msg="vocabulary is smaller than expected, padding with dummy tokens" expect=32000 actual=1 time=2025-10-04T05:52:31.545Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=general.file_type type=uint32 time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=general.quantization_version type=uint32 time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=llama.vocab_size type=uint32 time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.pre type=string time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.545Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.575Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.575Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.577Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.577Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.578Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:52:31.578Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.578Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string 
time=2025-10-04T05:52:31.578Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:52:31.578Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:52:31.578Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.579Z level=ERROR source=images.go:157 msg="unknown capability" capability=unknown time=2025-10-04T05:52:31.580Z level=WARN source=manifest.go:160 msg="bad manifest name" path=host/namespace/model/.hidden time=2025-10-04T05:52:31.581Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:52:31.581Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:52:31.581Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:52:31.581Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=4 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:31.582Z level=WARN source=quantization.go:145 msg="tensor cols 100 are not divisible by 32, required for Q8_0 - using fallback quantization F16" time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.582Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.582Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.582Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[512 2]" offset=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:31.582Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=output.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:52:31.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=12 shape="[512 2]" offset=0 time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:627 msg=output.weight kind=14 shape="[256 4]" offset=576 time=2025-10-04T05:52:31.583Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.583Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape="[512 2]" offset=0 time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=4096 time=2025-10-04T05:52:31.583Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.583Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:31.583Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=blk.0.attn_v.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:52:31.583Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.583Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=14 shape="[512 2]" offset=0 time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=864 time=2025-10-04T05:52:31.584Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.584Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[32 16 2]" offset=0 time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:52:31.584Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.584Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:31.584Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:31.584Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:52:31.584Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:52:31.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=8 shape="[32 16 2]" offset=0 time=2025-10-04T05:52:31.585Z level=DEBUG source=gguf.go:627 msg=output.weight kind=8 shape="[256 4]" offset=1088 time=2025-10-04T05:52:31.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.architecture default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.587Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.588Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.589Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.590Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.590Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.591Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.592Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.592Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.593Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:31.593Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.593Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.593Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.593Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.594Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.594Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:52:31.595Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:31.595Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.595Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.595Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.595Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:52:31.595Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.595Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.595Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment 
default=32 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.block_count default=0 time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.598Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.598Z level=DEBUG source=create.go:98 msg="create model from model name" from=bob resp = api.ShowResponse{License:"", Modelfile:"# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM test:latest\n\nFROM \nTEMPLATE {{ .Prompt }}\n", Parameters:"", Template:"{{ .Prompt }}", System:"", Renderer:"", Parser:"", Details:api.ModelDetails{ParentModel:"", Format:"", Family:"gptoss", Families:[]string{"gptoss"}, ParameterSize:"20.9B", QuantizationLevel:"MXFP4"}, Messages:[]api.Message(nil), RemoteModel:"bob", RemoteHost:"https://ollama.com:11434", ModelInfo:map[string]interface {}{"general.architecture":"gptoss", "gptoss.context_length":131072, "gptoss.embedding_length":2880}, ProjectorInfo:map[string]interface {}(nil), Tensors:[]api.Tensor(nil), Capabilities:[]model.Capability{"completion", "tools", "thinking"}, ModifiedAt:time.Date(2025, time.October, 4, 5, 52, 31, 599023714, time.UTC)} time=2025-10-04T05:52:31.599Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.599Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
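The resp = api.ShowResponse{...} value above is dumped by the remote-model test (RemoteModel "bob"). For orientation only, here is a trimmed local stand-in carrying just the fields visible in that dump; the struct below is a simplification for illustration, not the api package type itself:

package main

import (
	"fmt"
	"time"
)

// showResponse mirrors a subset of the fields printed in the test log above.
type showResponse struct {
	Template     string
	RemoteModel  string
	RemoteHost   string
	Capabilities []string
	ModelInfo    map[string]any
	ModifiedAt   time.Time
}

func main() {
	resp := showResponse{
		Template:     "{{ .Prompt }}",
		RemoteModel:  "bob",
		RemoteHost:   "https://ollama.com:11434",
		Capabilities: []string{"completion", "tools", "thinking"},
		ModelInfo: map[string]any{
			"general.architecture":    "gptoss",
			"gptoss.context_length":   131072,
			"gptoss.embedding_length": 2880,
		},
		ModifiedAt: time.Now().UTC(),
	}
	// Summarize the same information the log line reports.
	fmt.Printf("%s@%s arch=%v capabilities=%v\n",
		resp.RemoteModel, resp.RemoteHost,
		resp.ModelInfo["general.architecture"], resp.Capabilities)
}
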
key=general.alignment default=32 time=2025-10-04T05:52:31.600Z level=DEBUG source=gguf.go:578 msg=tokenizer.chat_template type=string time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.600Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.609Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.610Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.610Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.611Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:31.611Z 
level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.611Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:31.611Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:31.611Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.611Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.611Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.611Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.611Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.613Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-10-04T05:52:31.613Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA" time=2025-10-04T05:52:31.613Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so* time=2025-10-04T05:52:31.613Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build2474452555/b001/libcuda.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]" time=2025-10-04T05:52:31.614Z level=DEBUG 
source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:52:31.614Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so* time=2025-10-04T05:52:31.614Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build2474452555/b001/libcudart.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcudart.so* /tmp/go-build2474452555/b001/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]" time=2025-10-04T05:52:31.614Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:52:31.614Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu" time=2025-10-04T05:52:31.614Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered" time=2025-10-04T05:52:31.614Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:52:31.614Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.614Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.616Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.616Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.616Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.618Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.618Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.618Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.620Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.620Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.620Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.622Z level=DEBUG source=gpu.go:410 msg="updating system 
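The gpu.go records above expand a list of glob patterns for libcuda.so*/libcudart.so*, find no matches inside the mock chroot, and fall back to CPU ("no compatible GPUs were discovered"). A minimal sketch of that kind of glob-based library probe; the pattern list is abbreviated from the log and the function name is assumed for illustration:

package main

import (
	"fmt"
	"path/filepath"
)

// findLibraries expands each glob pattern and collects matches, mirroring the
// "gpu library search" / "discovered GPU libraries" records.
func findLibraries(globs []string) []string {
	var paths []string
	for _, g := range globs {
		matches, err := filepath.Glob(g)
		if err != nil {
			continue // malformed pattern; skip it
		}
		paths = append(paths, matches...)
	}
	return paths
}

func main() {
	// Abbreviated from the search paths shown in the log.
	globs := []string{
		"/usr/lib*/libcuda.so*",
		"/usr/local/cuda/lib*/libcudart.so*",
		"/opt/cuda/lib64/libcudart.so*",
	}
	paths := findLibraries(globs)
	if len(paths) == 0 {
		fmt.Println("no compatible GPUs were discovered") // matches the INFO record above
	} else {
		fmt.Println("discovered GPU libraries:", paths)
	}
}
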
memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.622Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.624Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.624Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.626Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.626Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.627Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.627Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.629Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.629Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.631Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.631Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.633Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" 
now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.633Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.633Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.635Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.635Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly2378395301/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.636Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:31.636Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:31.636Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:31.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.636Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:31.637Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] 
offset=320 time=2025-10-04T05:52:31.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.638Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.638Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.638Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.640Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.640Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.641Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.641Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.644Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.644Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.644Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.645Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.646Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.646Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.647Z level=DEBUG source=gpu.go:410 
msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.647Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.647Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.649Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.649Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.649Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.651Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.651Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.653Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.653Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.654Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.655Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.655Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.655Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly971626815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.657Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:31.657Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:31.657Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.658Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.658Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.658Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.658Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:31.659Z level=DEBUG source=manifest.go:53 msg="layer does not exist" digest=sha256:776957f9c9239232f060e29d642d8f5ef3bb931f485c27a13ae6385515fb425c time=2025-10-04T05:52:31.659Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.659Z level=DEBUG 
source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.659Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:31.659Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:31.660Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:31.660Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:31.660Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:31.660Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:31.660Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.660Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.660Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.660Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.660Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.661Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:52:31.661Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:52:31.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:52:31.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.663Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.663Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:52:31.663Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3352798715/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.665Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.665Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.665Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3352798715/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.666Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:31.666Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.666Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.667Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.667Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.667Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3352798715/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.669Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.669Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.669Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3352798715/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.671Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.671Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.671Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3352798715/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.674Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.674Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.674Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3352798715/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.676Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 
GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.676Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.676Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3352798715/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.708Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:31.708Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:31.708Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:31.708Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:31.708Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:31.708Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.708Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.708Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.708Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.708Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.709Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:52:31.709Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:52:31.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:52:31.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.712Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.712Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.712Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3505008902/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.714Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.714Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.714Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3505008902/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.715Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:31.715Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.715Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.716Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.716Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.716Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3505008902/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.718Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" 
time=2025-10-04T05:52:31.718Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.718Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3505008902/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.720Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.720Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.721Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3505008902/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.722Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:31.722Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.722Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.723Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.723Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.723Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3505008902/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.725Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.725Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.725Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3505008902/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.727Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.727Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.727Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3505008902/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.728Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:31.728Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:31.729Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 
msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:31.729Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:31.729Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:31.729Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.729Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.729Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.729Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.729Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.730Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.730Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.730Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag538228407/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.732Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.733Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 
time=2025-10-04T05:52:31.733Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag538228407/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.734Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.735Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.735Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag538228407/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.736Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.736Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.736Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag538228407/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.738Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.738Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.738Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag538228407/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:31.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.780Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:31.780Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:31.780Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.780Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:31.780Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:31.780Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:31.780Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 
time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:31.781Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:31.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:31.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:31.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.783Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.783Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimecontent_streams_as_it_arr1536776240/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:31.874Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:31.874Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:31.875Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 
time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.875Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:31.875Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:31.875Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.875Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.875Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:31.875Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:31.875Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.876Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:31.876Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:31.877Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:31.877Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.877Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimethinking_streams_separate1748509708/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:31.998Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:31.998Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:31.998Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:31.998Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=general.architecture 
type=string time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:31.998Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:31.999Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:31.999Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:31.999Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:31.999Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:31.999Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:31.999Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.000Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.000Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.001Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:32.001Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.001Z level=DEBUG source=sched.go:208 msg="loading first model" 
model=/tmp/TestChatHarmonyParserStreamingRealtimepartial_tags_buffer_until3339746018/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:32.153Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.153Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.153Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.153Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.153Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.153Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.153Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.153Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.153Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:32.154Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:32.154Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.154Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.154Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:32.154Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:32.154Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.154Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.154Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.155Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:32.155Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.155Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimesimple_assistant_after_an2623724244/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:32.186Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.186Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.186Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:32.186Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:32.187Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:32.187Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:32.187Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:32.187Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:32.187Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:32.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.187Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:32.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:32.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.188Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:32.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.188Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_parsed_and_retu3125111434/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:32.219Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.219Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:32.219Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 
shape=[1] offset=256 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:32.219Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:32.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:32.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:32.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.221Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:32.221Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.221Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_with_streaming_3207067175/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:32.312Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.312Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.313Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:32.313Z 
level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:32.313Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:32.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:32.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:32.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.314Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:32.314Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.314Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingSimple3238900954/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:32.315Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.315Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.315Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.315Z level=DEBUG source=sched.go:121 msg="starting llm 
scheduler" time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:32.315Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:32.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:32.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:32.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.317Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:32.317Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.317Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingsimple_message_without_thinking970822574/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:32.318Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.318Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.318Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 
msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:32.318Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:32.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:32.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:32.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.319Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:32.320Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.320Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingmessage_with_analysis_channel_for2587672329/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:32.320Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.320Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.320Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.320Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:32.320Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:32.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:32.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:32.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.321Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:32.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 
time=2025-10-04T05:52:32.321Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingstreaming_with_partial_tags_acros293697628/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.322Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.322Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.326Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:52:32 | 200 | 23.454µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/04 - 05:52:32 | 200 | 59.506µs | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/04 - 05:52:32 | 200 | 87.63µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:52:32 | 200 | 215.814µs | 127.0.0.1 | GET "/api/tags" time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.file_type default=0 time=2025-10-04T05:52:32.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.332Z level=INFO source=images.go:518 msg="total blobs: 3" time=2025-10-04T05:52:32.332Z level=INFO source=images.go:525 msg="total unused blobs removed: 0" time=2025-10-04T05:52:32.332Z level=INFO source=server.go:164 msg=http status=200 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:43040 proto=HTTP/1.1 query="" time=2025-10-04T05:52:32.332Z level=WARN source=server.go:164 msg=http error="model not found" status=404 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:43040 proto=HTTP/1.1 query="" [GIN] 2025/10/04 - 05:52:32 | 200 | 147.903µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:52:32 | 200 | 369.395µs | 127.0.0.1 | POST "/api/create" time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.file_type default=0 time=2025-10-04T05:52:32.334Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:52:32 | 200 | 351.373µs | 127.0.0.1 | POST "/api/copy" time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.336Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:52:32 | 200 | 750.672µs | 127.0.0.1 | POST "/api/show" time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.338Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:52:32 | 200 | 553.597µs | 127.0.0.1 | GET "/v1/models/show-model" [GIN] 2025/10/04 - 05:52:32 | 405 | 763ns | 127.0.0.1 | GET "/api/show" 
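The [GIN] access-log lines record the HTTP routes these tests exercise: GET /api/version, /api/tags and /v1/models, DELETE /api/delete, POST /api/create, /api/copy and /api/show, a model-specific GET /v1/models/show-model, and a 405 for GET /api/show (that route only accepts POST). A small client sketch against those endpoints; the 127.0.0.1:11434 address and the JSON body shape are assumptions, since the tests actually run an in-process server on an ephemeral port:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Assumption: a server on the default listen address; the tests above use
	// an in-process test server on an ephemeral port instead.
	base := "http://127.0.0.1:11434"

	// Read-only routes seen in the GIN log.
	for _, path := range []string{"/api/version", "/api/tags", "/v1/models"} {
		resp, err := http.Get(base + path)
		if err != nil {
			fmt.Println("GET", path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %d %s\n", path, resp.StatusCode, strings.TrimSpace(string(body)))
	}

	// /api/show is POST-only, which is why the log records a 405 for GET.
	// The JSON field name here is an assumption for illustration.
	resp, err := http.Post(base+"/api/show", "application/json",
		strings.NewReader(`{"model":"show-model"}`))
	if err != nil {
		fmt.Println("POST /api/show error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("POST /api/show ->", resp.StatusCode)
}
```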
time=2025-10-04T05:52:32.338Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.341Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.341Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.341Z level=DEBUG source=gguf.go:578 msg=general.type type=string time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.block_count default=0 time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.vision.block_count default=0 time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.block_count default=0 time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.vision.block_count default=0 time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:32.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.344Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:52:32.344Z level=INFO source=sched.go:417 msg="NewLlamaServer failed" model=foo error="something failed to load model blah: this model may be incompatible with your version of Ollama. 
If you previously pulled this model, try updating it by running `ollama pull `" time=2025-10-04T05:52:32.344Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:52:32.344Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:32.344Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:32.344Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open dummy_model_path: no such file or directory" time=2025-10-04T05:52:32.344Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:52:32.344Z level=ERROR source=sched.go:476 msg="error loading llama server" error="wait failure" time=2025-10-04T05:52:32.344Z level=DEBUG source=sched.go:478 msg="triggering expiration for failed load" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=dummy_model_path runner.num_ctx=4096 time=2025-10-04T05:52:32.345Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
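The gguf.go:578 and gguf.go:627 entries enumerate the parsed test model: each metadata key with its type (general.architecture string, llama.block_count uint32, tokenizer.ggml.tokens []string, ...) and each tensor with its kind, shape and offset (blk.0.attn.weight and output.weight, both kind 0 with shape [1 1 1 1]). A rough sketch of that in-memory view; the struct, types and placeholder values below are illustrative, not ollama's actual structures:

```go
package main

import "fmt"

// Hypothetical in-memory view of the parsed test model: typed metadata plus
// tensor descriptors. Field names and the placeholder values are illustrative;
// only the key names, tensor names, shapes and offsets come from the log above.
type tensor struct {
	name   string
	kind   uint32 // 0 corresponds to F32 in the GGUF tensor-type enumeration
	shape  [4]uint64
	offset uint64
}

func main() {
	kv := map[string]any{
		"general.architecture":          "llama",
		"llama.block_count":             uint32(1),
		"llama.context_length":          uint32(2048),
		"llama.embedding_length":        uint32(1),
		"llama.attention.head_count":    uint32(1),
		"llama.attention.head_count_kv": uint32(1),
		"tokenizer.ggml.tokens":         []string{"<s>"},
		"tokenizer.ggml.scores":         []float32{0},
		"tokenizer.ggml.token_type":     []int32{1},
	}
	tensors := []tensor{
		{name: "blk.0.attn.weight", kind: 0, shape: [4]uint64{1, 1, 1, 1}, offset: 0},
		{name: "output.weight", kind: 0, shape: [4]uint64{1, 1, 1, 1}, offset: 32},
	}

	// Print roughly the same information as the gguf.go:578 / gguf.go:627 lines.
	for k, v := range kv {
		fmt.Printf("msg=%s type=%T\n", k, v)
	}
	for _, t := range tensors {
		fmt.Printf("msg=%s kind=%d shape=%v offset=%d\n", t.name, t.kind, t.shape, t.offset)
	}
}
```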
time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.345Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.345Z level=INFO source=sched_test.go:179 msg=a time=2025-10-04T05:52:32.345Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.345Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.345Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSameModelSameRequest2732733850/002/2021574830 time=2025-10-04T05:52:32.346Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:32.346Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest2732733850/002/2021574830 runner.num_ctx=4096 time=2025-10-04T05:52:32.346Z level=INFO source=sched_test.go:196 msg=b time=2025-10-04T05:52:32.346Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSameModelSameRequest2732733850/002/2021574830 time=2025-10-04T05:52:32.346Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.346Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.346Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.346Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest2732733850/002/2021574830 runner.num_ctx=4096 time=2025-10-04T05:52:32.346Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.346Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:52:32.346Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.346Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.346Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.346Z level=INFO source=sched_test.go:223 msg=a time=2025-10-04T05:52:32.346Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.346Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.346Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 time=2025-10-04T05:52:32.346Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:32.347Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 time=2025-10-04T05:52:32.347Z level=INFO source=sched_test.go:241 msg=b time=2025-10-04T05:52:32.347Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 time=2025-10-04T05:52:32.347Z level=DEBUG source=sched.go:154 msg=reloading runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 time=2025-10-04T05:52:32.347Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:52:32.347Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 
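TestRequestsSimpleReloadSameModel narrates the reload handshake: the already-loaded runner is reset to expire immediately, the scheduler waits for pending requests to finish and for the unload to complete, and only then loads the model again. A stripped-down, hypothetical sketch of that expire-then-reload sequencing with channels (not the scheduler's actual code):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// A stripped-down, hypothetical runner record: to reload a model with new
// settings, its keep-alive is reset to zero so it expires as soon as it goes
// idle, and the caller blocks until an "unloaded" event arrives before
// loading the model again.
type runner struct {
	model    string
	refCount int
	mu       sync.Mutex
	expired  chan struct{}
}

func (r *runner) requestFinished() {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.refCount--
	if r.refCount == 0 {
		fmt.Println("runner with zero duration has gone idle, expiring to unload")
		close(r.expired)
	}
}

func main() {
	r := &runner{model: "test-model", refCount: 1, expired: make(chan struct{})}
	unloaded := make(chan struct{})

	// Unload loop: waits for expiration, tears the runner down, reports back.
	go func() {
		<-r.expired
		fmt.Println("unload completed", r.model)
		close(unloaded)
	}()

	fmt.Println("resetting model to expire immediately to make room")
	fmt.Println("waiting for pending requests to complete and unload to occur")

	// The in-flight request finishes shortly afterwards...
	go func() {
		time.Sleep(10 * time.Millisecond)
		r.requestFinished()
	}()

	<-unloaded
	fmt.Println("loading first model", r.model) // reload with the new settings
}
```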
time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 time=2025-10-04T05:52:32.348Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 time=2025-10-04T05:52:32.348Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="20 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsSimpleReloadSameModel1229065733/002/944382297 runner.num_ctx=4096 time=2025-10-04T05:52:32.348Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.348Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.348Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.348Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.348Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.349Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.349Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.349Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.349Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.349Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.349Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.349Z level=INFO source=sched_test.go:274 msg=a time=2025-10-04T05:52:32.349Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.349Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.349Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 time=2025-10-04T05:52:32.349Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:32.349Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 time=2025-10-04T05:52:32.349Z level=INFO source=sched_test.go:293 msg=b time=2025-10-04T05:52:32.349Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:52:32.350Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:52:32.350Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:52:32.350Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 time=2025-10-04T05:52:32.350Z level=INFO source=sched_test.go:311 msg=c time=2025-10-04T05:52:32.350Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=cpu available="24.2 GiB" time=2025-10-04T05:52:32.350Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=cpu total="29.8 GiB" available="19.6 GiB" time=2025-10-04T05:52:32.350Z level=INFO source=sched.go:470 msg="loaded runners" count=3 time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-4a runner.inference=cpu runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/006/2680125609 runner.num_ctx=4096 time=2025-10-04T05:52:32.350Z level=INFO source=sched_test.go:329 msg=d time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:52:32.350Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:162 msg="max runners achieved, unloading one to make room" runner_count=3 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 runner.num_ctx=4096 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 time=2025-10-04T05:52:32.352Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/002/4138825052 time=2025-10-04T05:52:32.352Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.353Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:52:32.353Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="3.7 GiB" time=2025-10-04T05:52:32.353Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 time=2025-10-04T05:52:32.353Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3b 
runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:52:32.353Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 runner.num_ctx=4096 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/004/2461142227 time=2025-10-04T05:52:32.358Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:52:32.358Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:52:32.358Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3c runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1468297811/008/894162964 runner.num_ctx=4096 time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.358Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.359Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.359Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 
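TestRequestsMultipleLoadedModels runs with OLLAMA_MAX_LOADED_MODELS=3; once the limit is reached the scheduler evicts a victim, preferring an idle runner ("found an idle runner to unload") and otherwise picking the one with the shortest keep-alive ("no idle runners, picking the shortest duration"). A hedged sketch of that selection policy; the loaded struct and field names are stand-ins, not ollama's real types:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical loaded-runner record; refCount == 0 means idle.
type loaded struct {
	name            string
	refCount        int
	sessionDuration time.Duration
}

// findRunnerToUnload mirrors the policy the log describes: prefer an idle
// runner; otherwise pick the one with the shortest keep-alive duration.
func findRunnerToUnload(runners []*loaded) *loaded {
	var victim *loaded
	for _, r := range runners {
		if r.refCount == 0 {
			fmt.Println("found an idle runner to unload:", r.name)
			return r
		}
		if victim == nil || r.sessionDuration < victim.sessionDuration {
			victim = r
		}
	}
	if victim != nil {
		fmt.Println("no idle runners, picking the shortest duration:", victim.name)
	}
	return victim
}

func main() {
	// OLLAMA_MAX_LOADED_MODELS=3 has been reached, so a runner must be evicted.
	runners := []*loaded{
		{name: "ollama-model-3a", refCount: 0, sessionDuration: 5 * time.Millisecond},
		{name: "ollama-model-3b", refCount: 1, sessionDuration: 0},
		{name: "ollama-model-4a", refCount: 1, sessionDuration: 2 * time.Millisecond},
	}

	// First pass: 3a is idle, so it goes first (as in the log).
	_ = findRunnerToUnload(runners)

	// With 3a gone, nobody is idle; the shortest keep-alive (3b) is picked next.
	_ = findRunnerToUnload(runners[1:])
}
```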
time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.359Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.359Z level=INFO source=sched_test.go:367 msg=a time=2025-10-04T05:52:32.360Z level=INFO source=sched_test.go:370 msg=b time=2025-10-04T05:52:32.360Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.360Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.360Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGetRunner3506419791/002/1458268995 time=2025-10-04T05:52:32.360Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:32.360Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 runner.num_ctx=4096 time=2025-10-04T05:52:32.360Z level=INFO source=sched_test.go:394 msg=c time=2025-10-04T05:52:32.360Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open bad path: no such file or directory" time=2025-10-04T05:52:32.360Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.360Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 runner.num_ctx=4096 duration=2ms time=2025-10-04T05:52:32.360Z level=DEBUG 
source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:32.362Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 runner.num_ctx=4096 time=2025-10-04T05:52:32.362Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 runner.num_ctx=4096 time=2025-10-04T05:52:32.362Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 runner.num_ctx=4096 time=2025-10-04T05:52:32.362Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 runner.num_ctx=4096 time=2025-10-04T05:52:32.362Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 runner.num_ctx=4096 time=2025-10-04T05:52:32.362Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 time=2025-10-04T05:52:32.362Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3506419791/002/1458268995 time=2025-10-04T05:52:32.362Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:52:32.410Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.410Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.411Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:52:32.411Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=foo runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:52:32.411Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:52:32.431Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.431Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.431Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.431Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.431Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.431Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.431Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.432Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestPrematureExpired2595981550/002/910866558 time=2025-10-04T05:52:32.432Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:32.432Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal 
runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 time=2025-10-04T05:52:32.432Z level=INFO source=sched_test.go:481 msg="sending premature expired event now" time=2025-10-04T05:52:32.432Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 time=2025-10-04T05:52:32.432Z level=DEBUG source=sched.go:310 msg="expired event with positive ref count, retrying" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:52:32.437Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:32.437Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:52:32.437Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 runner.num_ctx=4096 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 time=2025-10-04T05:52:32.442Z level=DEBUG source=sched.go:332 msg="duplicate expired event, ignoring" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2595981550/002/910866558 time=2025-10-04T05:52:32.467Z level=ERROR source=sched.go:272 msg="finished request signal received after model unloaded" modelPath=/tmp/TestPrematureExpired2595981550/002/910866558 time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=1 library=a available="900 B" time=2025-10-04T05:52:32.473Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=1 library=a total="1000 B" available="825 B" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=2 library=a available="1.9 KiB" time=2025-10-04T05:52:32.473Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=2 library=a total="2.0 KiB" available="1.8 KiB" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=a time=2025-10-04T05:52:32.473Z level=DEBUG source=sched.go:763 msg="shutting down runner" 
model=b time=2025-10-04T05:52:32.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:32.473Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:32.474Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:32.474Z level=INFO source=sched_test.go:669 msg=scenario1a time=2025-10-04T05:52:32.474Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:32.474Z level=DEBUG source=sched.go:142 msg="pending request cancelled or timed out, skipping scheduling" time=2025-10-04T05:52:32.479Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:32.479Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" PASS ok github.com/ollama/ollama/server 0.954s github.com/ollama/ollama/server time=2025-10-04T05:52:33.773Z level=INFO source=logging.go:32 msg="ollama app started" time=2025-10-04T05:52:33.774Z level=DEBUG source=convert.go:232 msg="vocabulary is smaller than expected, padding with dummy tokens" expect=32000 actual=1 time=2025-10-04T05:52:33.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=general.file_type type=uint32 time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=general.quantization_version type=uint32 time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=llama.vocab_size type=uint32 time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.pre type=string time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:33.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:33.812Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.812Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.813Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.813Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.813Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.813Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.813Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:52:33.813Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.813Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:52:33.813Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.814Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.814Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.814Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:52:33.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.814Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:52:33.814Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.815Z level=ERROR source=images.go:157 msg="unknown capability" capability=unknown time=2025-10-04T05:52:33.816Z level=WARN source=manifest.go:160 msg="bad manifest name" path=host/namespace/model/.hidden time=2025-10-04T05:52:33.817Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:52:33.818Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:52:33.818Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:52:33.818Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=4 time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:33.818Z level=WARN source=quantization.go:145 msg="tensor cols 100 are not divisible by 32, required for Q8_0 - using fallback quantization F16" time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:52:33.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.818Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.818Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[512 2]" offset=0 time=2025-10-04T05:52:33.818Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:33.819Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=output.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=12 shape="[512 2]" offset=0 time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:627 msg=output.weight kind=14 shape="[256 4]" offset=576 time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape="[512 2]" offset=0 time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=4096 time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment 
default=32 time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:33.819Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=blk.0.attn_v.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=14 shape="[512 2]" offset=0 time=2025-10-04T05:52:33.819Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=864 time=2025-10-04T05:52:33.819Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.820Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.820Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.820Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[32 16 2]" offset=0 time=2025-10-04T05:52:33.820Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:52:33.820Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.820Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:33.820Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:52:33.820Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.820Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.820Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:52:33.820Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:52:33.820Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=8 shape="[32 16 2]" offset=0 time=2025-10-04T05:52:33.820Z level=DEBUG source=gguf.go:627 msg=output.weight kind=8 shape="[256 4]" offset=1088 time=2025-10-04T05:52:33.820Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.823Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.824Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.825Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.826Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.827Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:33.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.827Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.828Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:33.828Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.828Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.828Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.828Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:52:33.830Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:33.830Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.830Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.830Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.830Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
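Every entry in this test output follows the same time=... level=... source=file:line msg=... key=value shape. That layout matches what Go's log/slog TextHandler produces when source attribution is enabled and the source path is trimmed to its base name; the handler options below are an assumption about how such output could be configured, not ollama's actual logging setup:

package main

import (
	"log/slog"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	// Text handler emitting DEBUG and above, with source attribution.
	h := slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
		Level:     slog.LevelDebug,
		AddSource: true,
		// Trim the source attribute to "file.go:line", as seen in the log above.
		ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
			if a.Key == slog.SourceKey {
				if src, ok := a.Value.Any().(*slog.Source); ok {
					a.Value = slog.StringValue(filepath.Base(src.File) + ":" + strconv.Itoa(src.Line))
				}
			}
			return a
		},
	})
	slog.SetDefault(slog.New(h))

	slog.Debug("starting llm scheduler")
	slog.Info("loaded runners", "count", 1)
}

Run as-is, this prints logfmt-style lines such as time=... level=DEBUG source=main.go:30 msg="starting llm scheduler", the same shape as the scheduler and gguf entries in this transcript.
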
time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.type default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.833Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.833Z level=DEBUG source=create.go:98 msg="create model from model name" from=bob resp = api.ShowResponse{License:"", Modelfile:"# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM test:latest\n\nFROM \nTEMPLATE {{ .Prompt }}\n", Parameters:"", Template:"{{ .Prompt }}", System:"", Renderer:"", Parser:"", Details:api.ModelDetails{ParentModel:"", Format:"", Family:"gptoss", Families:[]string{"gptoss"}, ParameterSize:"20.9B", QuantizationLevel:"MXFP4"}, Messages:[]api.Message(nil), RemoteModel:"bob", RemoteHost:"https://ollama.com:11434", ModelInfo:map[string]interface {}{"general.architecture":"gptoss", "gptoss.context_length":131072, "gptoss.embedding_length":2880}, ProjectorInfo:map[string]interface {}(nil), Tensors:[]api.Tensor(nil), Capabilities:[]model.Capability{"completion", "tools", "thinking"}, ModifiedAt:time.Date(2025, time.October, 4, 5, 52, 33, 833869247, time.UTC)} time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture 
default=unknown time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.835Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.835Z level=DEBUG source=gguf.go:578 msg=tokenizer.chat_template type=string time=2025-10-04T05:52:33.835Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.835Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.835Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.835Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.835Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.835Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.835Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.844Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.844Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.845Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.845Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type 
default=unknown time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.846Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:33.847Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:33.847Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:33.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.848Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.848Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.848Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.848Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.848Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-10-04T05:52:33.849Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA" time=2025-10-04T05:52:33.849Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so* time=2025-10-04T05:52:33.849Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build3551517672/b001/libcuda.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]" time=2025-10-04T05:52:33.849Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:52:33.849Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so* time=2025-10-04T05:52:33.849Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build3551517672/b001/libcudart.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcudart.so* /tmp/go-build3551517672/b001/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]" time=2025-10-04T05:52:33.849Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:52:33.849Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu" time=2025-10-04T05:52:33.849Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered" time=2025-10-04T05:52:33.849Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:52:33.850Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.850Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.852Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.852Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.852Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.854Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 
GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.854Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.854Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.855Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.855Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.855Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.857Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.857Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.857Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.859Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.859Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.859Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.860Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.861Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.861Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.863Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.863Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.863Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.864Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" 
now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.865Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.865Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.866Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.866Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.866Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.868Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.868Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.868Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.869Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.869Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.869Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1854578990/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.871Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:33.871Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:33.871Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.871Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 
time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:33.871Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:33.872Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.872Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.872Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.872Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.872Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.873Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.873Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.873Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.875Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.875Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.875Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.877Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.877Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.877Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.878Z level=DEBUG source=gpu.go:410 msg="updating system 
memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.878Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.878Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.880Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.880Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.880Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.882Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.882Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.882Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.884Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.884Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.884Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.886Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.886Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.886Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.887Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.888Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.888Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.889Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 
GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.889Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.889Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3840753448/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.890Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:33.890Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.891Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T05:52:33.893Z level=DEBUG source=manifest.go:53 msg="layer does not exist" digest=sha256:776957f9c9239232f060e29d642d8f5ef3bb931f485c27a13ae6385515fb425c time=2025-10-04T05:52:33.893Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.893Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.893Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:33.893Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:33.893Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:33.894Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:33.894Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.894Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.895Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.895Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.895Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.896Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.896Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:52:33.896Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.896Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.896Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.896Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:52:33.896Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:52:33.896Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.896Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.896Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.897Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.897Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3655368645/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.898Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.899Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3655368645/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.900Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:33.900Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.900Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.901Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.902Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.902Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3655368645/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.903Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.903Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.903Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3655368645/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.905Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" 
before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.905Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3655368645/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.907Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.907Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3655368645/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.909Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.909Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.909Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3655368645/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.942Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:33.942Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:33.942Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:33.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 
shape=[1] offset=160 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:33.942Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:33.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.943Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:52:33.943Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:52:33.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:52:33.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.945Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.945Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.945Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2119212815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.947Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.947Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.947Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2119212815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.948Z 
level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:33.948Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.948Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.950Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.950Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.950Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2119212815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.951Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.951Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.951Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2119212815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.953Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.953Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.953Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2119212815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.955Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:52:33.955Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.955Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.956Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.956Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.956Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2119212815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.958Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.958Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.958Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2119212815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.960Z level=DEBUG source=gpu.go:410 
msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.960Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.960Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2119212815/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.961Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:33.961Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:33.962Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.962Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:33.962Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:33.962Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.962Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.962Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:33.962Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=tokenizer.chat_template default="" time=2025-10-04T05:52:33.962Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:33.963Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.963Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.963Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag806412668/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.965Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.965Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.965Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag806412668/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.967Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.967Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.967Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag806412668/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.969Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.969Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.969Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag806412668/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:33.971Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:33.971Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:33.971Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag806412668/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:52:34.013Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.013Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.013Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.013Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.013Z level=DEBUG 
source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.013Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.013Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.013Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.013Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.013Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.013Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.013Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.013Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.014Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.014Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.014Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.014Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.014Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.014Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.014Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.014Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.015Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.015Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.015Z level=DEBUG source=sched.go:208 msg="loading first model" 
model=/tmp/TestChatHarmonyParserStreamingRealtimecontent_streams_as_it_arr889605319/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.106Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.107Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.107Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.107Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.107Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.108Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.108Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.108Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.108Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.108Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.108Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.108Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.109Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.109Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.109Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimethinking_streams_separate2095205626/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.230Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.230Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.230Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.230Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.230Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.230Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.230Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.230Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.230Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.231Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.231Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.231Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.231Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.231Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.231Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.231Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.231Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.231Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.232Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.232Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.232Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimepartial_tags_buffer_until93855015/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.384Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.384Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.384Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.384Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.385Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 
shape=[1] offset=256 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.385Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.386Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.386Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.386Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimesimple_assistant_after_an2094685295/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.417Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.417Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.417Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.417Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.417Z 
level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.417Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.418Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.418Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.418Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.418Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.418Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.418Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.418Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.419Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.419Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.419Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_parsed_and_retu989070668/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.449Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.449Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.450Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.450Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:578 
msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.450Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.451Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.451Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.451Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.451Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.451Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.452Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.452Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.453Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.453Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.453Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_with_streaming_2069662407/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.544Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.544Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.544Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.544Z 
level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.544Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.544Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.545Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.545Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.545Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.545Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.545Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.545Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.545Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.546Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.546Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.546Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingSimple2032070822/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.546Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.546Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.546Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.546Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.546Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.547Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.547Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.547Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.547Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.547Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.547Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.547Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.548Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.548Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 
time=2025-10-04T05:52:34.548Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingsimple_message_without_thinking334897565/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.549Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.549Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.549Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.549Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.549Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.550Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.550Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.550Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.550Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.550Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.550Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.550Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.550Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.551Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.551Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingmessage_with_analysis_channel_for1356246814/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.551Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:52:34.551Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:52:34.551Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:52:34.551Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.551Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:52:34.552Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.552Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.type default=unknown time=2025-10-04T05:52:34.552Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:52:34.552Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:52:34.552Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.552Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.552Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.553Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.2 GiB" before.free_swap="139.0 GiB" now.total="7.6 GiB" now.free="1.2 GiB" now.free_swap="139.0 GiB" time=2025-10-04T05:52:34.553Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.553Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingstreaming_with_partial_tags_acros1542880371/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:52:34.553Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.553Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.554Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.554Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.555Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.556Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.type default=unknown time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.557Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.558Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.559Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:52:34 | 200 | 22.209µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/04 - 05:52:34 | 200 | 47.791µs | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/04 - 05:52:34 | 200 | 81.002µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:52:34 | 200 | 123.699µs | 127.0.0.1 | GET "/api/tags" time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type 
default=unknown time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.562Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.563Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.563Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.563Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.563Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.563Z level=INFO source=images.go:518 msg="total blobs: 3" time=2025-10-04T05:52:34.563Z level=INFO source=images.go:525 msg="total unused blobs removed: 0" time=2025-10-04T05:52:34.563Z level=INFO source=server.go:164 msg=http status=200 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:41912 proto=HTTP/1.1 query="" time=2025-10-04T05:52:34.564Z level=WARN source=server.go:164 msg=http error="model not found" status=404 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:41912 proto=HTTP/1.1 query="" [GIN] 2025/10/04 - 05:52:34 | 200 | 412.096µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.565Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:52:34 | 200 | 413.112µs | 127.0.0.1 | POST "/api/create" time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown 
time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.566Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:52:34 | 200 | 391.107µs | 127.0.0.1 | POST "/api/copy" time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.567Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:52:34 | 200 | 498.58µs | 127.0.0.1 | POST "/api/show" time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.568Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.569Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:52:34 | 200 | 703.438µs | 127.0.0.1 | GET "/v1/models/show-model" [GIN] 2025/10/04 - 05:52:34 | 405 | 1.142µs | 127.0.0.1 | GET "/api/show" time=2025-10-04T05:52:34.570Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" 
time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.571Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.573Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.573Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.573Z level=DEBUG source=gguf.go:578 msg=general.type type=string time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.block_count default=0 time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.vision.block_count default=0 time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.block_count default=0 time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.vision.block_count default=0 time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.573Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:52:34.574Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.574Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.574Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:52:34.574Z level=INFO source=sched.go:417 msg="NewLlamaServer failed" model=foo error="something failed to load model blah: this model may be incompatible with your version of Ollama. 
If you previously pulled this model, try updating it by running `ollama pull `" time=2025-10-04T05:52:34.574Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:52:34.574Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:34.574Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:34.574Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open dummy_model_path: no such file or directory" time=2025-10-04T05:52:34.574Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:52:34.574Z level=ERROR source=sched.go:476 msg="error loading llama server" error="wait failure" time=2025-10-04T05:52:34.575Z level=DEBUG source=sched.go:478 msg="triggering expiration for failed load" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=dummy_model_path runner.num_ctx=4096 time=2025-10-04T05:52:34.576Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.576Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.576Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.576Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.576Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.576Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.576Z level=INFO source=sched_test.go:179 msg=a time=2025-10-04T05:52:34.577Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.578Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.578Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSameModelSameRequest1304547278/002/353307507 time=2025-10-04T05:52:34.578Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:34.578Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest1304547278/002/353307507 runner.num_ctx=4096 time=2025-10-04T05:52:34.578Z level=INFO source=sched_test.go:196 msg=b time=2025-10-04T05:52:34.578Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSameModelSameRequest1304547278/002/353307507 time=2025-10-04T05:52:34.578Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.578Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.578Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.578Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.578Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.578Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest1304547278/002/353307507 runner.num_ctx=4096 time=2025-10-04T05:52:34.578Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:52:34.579Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.579Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.579Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.579Z level=INFO source=sched_test.go:223 msg=a time=2025-10-04T05:52:34.579Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.579Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.579Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 time=2025-10-04T05:52:34.579Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:34.579Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 time=2025-10-04T05:52:34.579Z level=INFO source=sched_test.go:241 msg=b time=2025-10-04T05:52:34.579Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 time=2025-10-04T05:52:34.579Z level=DEBUG source=sched.go:154 msg=reloading runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 time=2025-10-04T05:52:34.579Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:52:34.579Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 
time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 time=2025-10-04T05:52:34.580Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.580Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 time=2025-10-04T05:52:34.581Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:34.581Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="20 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsSimpleReloadSameModel3710205922/002/1620041089 runner.num_ctx=4096 time=2025-10-04T05:52:34.581Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.581Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.581Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.581Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.581Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.581Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.581Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.581Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.581Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.581Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.582Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.582Z level=INFO source=sched_test.go:274 msg=a time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 time=2025-10-04T05:52:34.582Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 time=2025-10-04T05:52:34.582Z level=INFO source=sched_test.go:293 msg=b time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:52:34.582Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:52:34.582Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:52:34.582Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 time=2025-10-04T05:52:34.582Z level=INFO source=sched_test.go:311 msg=c time=2025-10-04T05:52:34.582Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=cpu available="24.2 GiB" time=2025-10-04T05:52:34.582Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=cpu total="29.8 GiB" available="19.6 GiB" time=2025-10-04T05:52:34.582Z level=INFO source=sched.go:470 msg="loaded runners" count=3 time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-4a runner.inference=cpu runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/006/1883779051 runner.num_ctx=4096 time=2025-10-04T05:52:34.582Z level=INFO source=sched_test.go:329 msg=d time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:52:34.582Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:162 msg="max runners achieved, unloading one to make room" runner_count=3 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 runner.num_ctx=4096 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/002/4138897082 time=2025-10-04T05:52:34.584Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:52:34.584Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="3.7 GiB" time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3b 
runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:52:34.584Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 runner.num_ctx=4096 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/004/2061849342 time=2025-10-04T05:52:34.590Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:52:34.590Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:52:34.590Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3c runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1355182745/008/1086197189 runner.num_ctx=4096 time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.590Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.591Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 
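Editor's note: the scheduler entries above ("max runners achieved, unloading one to make room", "found an idle runner to unload", "no idle runners, picking the shortest duration") describe the eviction order these tests exercise. A minimal sketch of that selection rule under those assumptions; the types are hypothetical, not ollama's real ones:

package main

import (
	"fmt"
	"time"
)

// runner is a hypothetical loaded-model record: refCount counts in-flight
// requests, sessionDuration is the keep-alive the request asked for.
type runner struct {
	name            string
	refCount        int
	sessionDuration time.Duration
}

// pickVictim prefers an idle runner (refCount == 0); if every runner is busy,
// it falls back to the one with the shortest keep-alive, matching the
// "found an idle runner" / "no idle runners, picking the shortest duration"
// messages in the scheduler test log.
func pickVictim(rs []*runner) *runner {
	var victim *runner
	for _, r := range rs {
		if r.refCount == 0 {
			return r // idle runner found
		}
		if victim == nil || r.sessionDuration < victim.sessionDuration {
			victim = r
		}
	}
	return victim
}

func main() {
	rs := []*runner{
		{name: "ollama-model-3b", refCount: 1, sessionDuration: 5 * time.Millisecond},
		{name: "ollama-model-4a", refCount: 1, sessionDuration: 2 * time.Hour},
	}
	fmt.Println(pickVictim(rs).name) // no idle runners: shortest duration wins
}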
time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.591Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.591Z level=INFO source=sched_test.go:367 msg=a time=2025-10-04T05:52:34.591Z level=INFO source=sched_test.go:370 msg=b time=2025-10-04T05:52:34.591Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.591Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.591Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGetRunner3070499686/002/537247562 time=2025-10-04T05:52:34.591Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:34.592Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 runner.num_ctx=4096 time=2025-10-04T05:52:34.592Z level=INFO source=sched_test.go:394 msg=c time=2025-10-04T05:52:34.592Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open bad path: no such file or directory" time=2025-10-04T05:52:34.592Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.592Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 runner.num_ctx=4096 duration=2ms time=2025-10-04T05:52:34.592Z level=DEBUG 
source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:34.594Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 runner.num_ctx=4096 time=2025-10-04T05:52:34.594Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 runner.num_ctx=4096 time=2025-10-04T05:52:34.594Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 runner.num_ctx=4096 time=2025-10-04T05:52:34.594Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 runner.num_ctx=4096 time=2025-10-04T05:52:34.594Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 runner.num_ctx=4096 time=2025-10-04T05:52:34.594Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 time=2025-10-04T05:52:34.594Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3070499686/002/537247562 time=2025-10-04T05:52:34.594Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.642Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:52:34.642Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 
runner.model=foo runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:52:34.642Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:52:34.662Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.663Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.663Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.663Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.663Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.663Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.663Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.664Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestPrematureExpired2194170054/002/45223310 time=2025-10-04T05:52:34.664Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:52:34.664Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 
runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 time=2025-10-04T05:52:34.664Z level=INFO source=sched_test.go:481 msg="sending premature expired event now" time=2025-10-04T05:52:34.664Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 time=2025-10-04T05:52:34.664Z level=DEBUG source=sched.go:310 msg="expired event with positive ref count, retrying" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:52:34.669Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:52:34.669Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:52:34.669Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:52:34.674Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 time=2025-10-04T05:52:34.674Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 time=2025-10-04T05:52:34.674Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 time=2025-10-04T05:52:34.674Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 runner.num_ctx=4096 time=2025-10-04T05:52:34.674Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 time=2025-10-04T05:52:34.674Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2194170054/002/45223310 time=2025-10-04T05:52:34.674Z level=DEBUG 
source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:52:34.700Z level=ERROR source=sched.go:272 msg="finished request signal received after model unloaded" modelPath=/tmp/TestPrematureExpired2194170054/002/45223310 time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=1 library=a available="900 B" time=2025-10-04T05:52:34.705Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=1 library=a total="1000 B" available="825 B" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=2 library=a available="1.9 KiB" time=2025-10-04T05:52:34.705Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=2 library=a total="2.0 KiB" available="1.8 KiB" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=a time=2025-10-04T05:52:34.705Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=b time=2025-10-04T05:52:34.706Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string 
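Editor's note: TestPrematureExpired above covers an expiry event arriving while a request still holds the runner ("expired event with positive ref count, retrying"). A small, self-contained sketch of that pattern using a timer plus a reference-count guard; the types and timings are illustrative only:

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// expirable is a hypothetical runner handle: the expiry timer fires after the
// keep-alive duration, but unloading only proceeds once refCount drops to
// zero, matching the "expired event with positive ref count, retrying" entry.
type expirable struct {
	refCount atomic.Int64
	expired  chan struct{}
}

func (e *expirable) expireAfter(d time.Duration) {
	time.AfterFunc(d, func() { e.expired <- struct{}{} })
}

func main() {
	e := &expirable{expired: make(chan struct{}, 1)}
	e.refCount.Store(1) // one in-flight request
	e.expireAfter(5 * time.Millisecond)

	// Simulate the request finishing a little after the first expiry fires.
	time.AfterFunc(20*time.Millisecond, func() { e.refCount.Store(0) })

	for range e.expired {
		if n := e.refCount.Load(); n > 0 {
			fmt.Println("expired event with positive ref count, retrying", n)
			e.expireAfter(5 * time.Millisecond) // re-arm, as the scheduler log shows
			continue
		}
		fmt.Println("runner idle, unloading")
		return
	}
}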
time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:52:34.706Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:52:34.706Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:52:34.706Z level=INFO source=sched_test.go:669 msg=scenario1a time=2025-10-04T05:52:34.706Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:52:34.706Z level=DEBUG source=sched.go:142 msg="pending request cancelled or timed out, skipping scheduling" PASS ok github.com/ollama/ollama/server 0.951s github.com/ollama/ollama/server/internal/cache/blob PASS ok github.com/ollama/ollama/server/internal/cache/blob 0.005s github.com/ollama/ollama/server/internal/cache/blob PASS ok github.com/ollama/ollama/server/internal/cache/blob 0.005s github.com/ollama/ollama/server/internal/client/ollama 2025/10/04 05:52:35 http: TLS handshake error from 127.0.0.1:58558: remote error: tls: bad certificate PASS ok github.com/ollama/ollama/server/internal/client/ollama 0.147s github.com/ollama/ollama/server/internal/client/ollama 2025/10/04 05:52:36 http: TLS handshake error from 127.0.0.1:34028: remote error: tls: bad certificate PASS ok github.com/ollama/ollama/server/internal/client/ollama 0.147s github.com/ollama/ollama/server/internal/internal/backoff ? github.com/ollama/ollama/server/internal/internal/backoff [no test files] github.com/ollama/ollama/server/internal/internal/names PASS ok github.com/ollama/ollama/server/internal/internal/names 0.002s github.com/ollama/ollama/server/internal/internal/names PASS ok github.com/ollama/ollama/server/internal/internal/names 0.002s github.com/ollama/ollama/server/internal/internal/stringsx PASS ok github.com/ollama/ollama/server/internal/internal/stringsx 0.004s github.com/ollama/ollama/server/internal/internal/stringsx PASS ok github.com/ollama/ollama/server/internal/internal/stringsx 0.003s github.com/ollama/ollama/server/internal/internal/syncs ? github.com/ollama/ollama/server/internal/internal/syncs [no test files] github.com/ollama/ollama/server/internal/manifest ? github.com/ollama/ollama/server/internal/manifest [no test files] github.com/ollama/ollama/server/internal/registry 2025/10/04 05:52:37 http: TLS handshake error from 127.0.0.1:49948: write tcp 127.0.0.1:44477->127.0.0.1:49948: use of closed network connection PASS ok github.com/ollama/ollama/server/internal/registry 0.009s github.com/ollama/ollama/server/internal/registry 2025/10/04 05:52:37 http: TLS handshake error from 127.0.0.1:42800: write tcp 127.0.0.1:44107->127.0.0.1:42800: use of closed network connection PASS ok github.com/ollama/ollama/server/internal/registry 0.010s github.com/ollama/ollama/server/internal/testutil ? github.com/ollama/ollama/server/internal/testutil [no test files] github.com/ollama/ollama/template PASS ok github.com/ollama/ollama/template 0.611s github.com/ollama/ollama/template PASS ok github.com/ollama/ollama/template 0.602s github.com/ollama/ollama/thinking PASS ok github.com/ollama/ollama/thinking 0.002s github.com/ollama/ollama/thinking PASS ok github.com/ollama/ollama/thinking 0.002s github.com/ollama/ollama/tools PASS ok github.com/ollama/ollama/tools 0.006s github.com/ollama/ollama/tools PASS ok github.com/ollama/ollama/tools 0.006s github.com/ollama/ollama/types/errtypes ? 
github.com/ollama/ollama/types/errtypes [no test files] github.com/ollama/ollama/types/model PASS ok github.com/ollama/ollama/types/model 0.004s github.com/ollama/ollama/types/model PASS ok github.com/ollama/ollama/types/model 0.004s github.com/ollama/ollama/types/syncmap ? github.com/ollama/ollama/types/syncmap [no test files] github.com/ollama/ollama/version ? github.com/ollama/ollama/version [no test files] + RPM_EC=0 ++ jobs -p + exit 0 Processing files: ollama-0.12.3-1.fc44.x86_64 Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.3qLFBZ + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + DOCDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export DOCDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/docs /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/CONTRIBUTING.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/README.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/SECURITY.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + RPM_EC=0 ++ jobs -p + exit 0 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.2QGCis + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/vendor/modules.txt /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + RPM_EC=0 ++ jobs -p + exit 0 warning: File listed twice: /usr/share/licenses/ollama Provides: bundled(golang(github.com/agnivade/levenshtein)) = 1.1.1 bundled(golang(github.com/apache/arrow/go/arrow)) = bc21918 bundled(golang(github.com/bytedance/sonic)) = 1.11.6 bundled(golang(github.com/bytedance/sonic/loader)) = 0.1.1 bundled(golang(github.com/chewxy/hm)) = 1.0.0 bundled(golang(github.com/chewxy/math32)) = 1.11.0 bundled(golang(github.com/cloudwego/base64x)) = 0.1.4 bundled(golang(github.com/cloudwego/iasm)) = 0.2.0 bundled(golang(github.com/containerd/console)) = 1.0.3 bundled(golang(github.com/d4l3k/go-bfloat16)) = 690c3bd bundled(golang(github.com/davecgh/go-spew)) = 1.1.1 bundled(golang(github.com/dlclark/regexp2)) = 1.11.4 bundled(golang(github.com/emirpasic/gods/v2)) = 2.0.0_alpha bundled(golang(github.com/gabriel-vasile/mimetype)) = 1.4.3 bundled(golang(github.com/gin-contrib/cors)) = 1.7.2 bundled(golang(github.com/gin-contrib/sse)) = 0.1.0 bundled(golang(github.com/gin-gonic/gin)) = 1.10.0 bundled(golang(github.com/go-playground/locales)) = 0.14.1 bundled(golang(github.com/go-playground/universal-translator)) = 0.18.1 bundled(golang(github.com/go-playground/validator/v10)) = 10.20.0 bundled(golang(github.com/goccy/go-json)) = 0.10.2 bundled(golang(github.com/gogo/protobuf)) = 1.3.2 bundled(golang(github.com/golang/protobuf)) = 1.5.4 bundled(golang(github.com/google/flatbuffers)) = 24.3.25+incompatible bundled(golang(github.com/google/go-cmp)) = 0.7.0 bundled(golang(github.com/google/uuid)) 
= 1.6.0 bundled(golang(github.com/inconshreveable/mousetrap)) = 1.1.0 bundled(golang(github.com/json-iterator/go)) = 1.1.12 bundled(golang(github.com/klauspost/cpuid/v2)) = 2.2.7 bundled(golang(github.com/kr/text)) = 0.2.0 bundled(golang(github.com/leodido/go-urn)) = 1.4.0 bundled(golang(github.com/mattn/go-isatty)) = 0.0.20 bundled(golang(github.com/mattn/go-runewidth)) = 0.0.14 bundled(golang(github.com/modern-go/concurrent)) = bacd9c7 bundled(golang(github.com/modern-go/reflect2)) = 1.0.2 bundled(golang(github.com/nlpodyssey/gopickle)) = 0.3.0 bundled(golang(github.com/olekukonko/tablewriter)) = 0.0.5 bundled(golang(github.com/pdevine/tensor)) = f88f456 bundled(golang(github.com/pelletier/go-toml/v2)) = 2.2.2 bundled(golang(github.com/pkg/errors)) = 0.9.1 bundled(golang(github.com/pmezard/go-difflib)) = 1.0.0 bundled(golang(github.com/rivo/uniseg)) = 0.2.0 bundled(golang(github.com/spf13/cobra)) = 1.7.0 bundled(golang(github.com/spf13/pflag)) = 1.0.5 bundled(golang(github.com/stretchr/testify)) = 1.9.0 bundled(golang(github.com/twitchyliquid64/golang-asm)) = 0.15.1 bundled(golang(github.com/ugorji/go/codec)) = 1.2.12 bundled(golang(github.com/x448/float16)) = 0.8.4 bundled(golang(github.com/xtgo/set)) = 1.0.0 bundled(golang(go4.org/unsafe/assume-no-moving-gc)) = b99613f bundled(golang(golang.org/x/arch)) = 0.8.0 bundled(golang(golang.org/x/crypto)) = 0.36.0 bundled(golang(golang.org/x/exp)) = aa4b98e bundled(golang(golang.org/x/image)) = 0.22.0 bundled(golang(golang.org/x/net)) = 0.38.0 bundled(golang(golang.org/x/sync)) = 0.12.0 bundled(golang(golang.org/x/sys)) = 0.31.0 bundled(golang(golang.org/x/term)) = 0.30.0 bundled(golang(golang.org/x/text)) = 0.23.0 bundled(golang(golang.org/x/tools)) = 0.30.0 bundled(golang(golang.org/x/xerrors)) = 5ec99f8 bundled(golang(gonum.org/v1/gonum)) = 0.15.0 bundled(golang(google.golang.org/protobuf)) = 1.34.1 bundled(golang(gopkg.in/yaml.v3)) = 3.0.1 bundled(golang(gorgonia.org/vecf32)) = 0.9.0 bundled(golang(gorgonia.org/vecf64)) = 0.9.0 bundled(llama-cpp) = b6121 config(ollama) = 0.12.3-1.fc44 group(ollama) group(ollama) = ZyBvbGxhbWEgLSAt ollama = 0.12.3-1.fc44 ollama(x86-64) = 0.12.3-1.fc44 user(ollama) = dSBvbGxhbWEgLSAiT2xsYW1hIiAvdmFyL2xpYi9vbGxhbWEgLQAA Requires(interp): /bin/sh /bin/sh /bin/sh /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(pre): /bin/sh group(ollama) user(ollama) Requires(post): /bin/sh Requires(preun): /bin/sh Requires(postun): /bin/sh group(ollama) user(ollama) Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libresolv.so.2()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.11)(64bit) libstdc++.so.6(CXXABI_1.3.13)(64bit) libstdc++.so.6(CXXABI_1.3.15)(64bit) libstdc++.so.6(CXXABI_1.3.2)(64bit) libstdc++.so.6(CXXABI_1.3.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) 
libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.17)(64bit) libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.19)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.22)(64bit) libstdc++.so.6(GLIBCXX_3.4.25)(64bit) libstdc++.so.6(GLIBCXX_3.4.26)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) libstdc++.so.6(GLIBCXX_3.4.32)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) rtld(GNU_HASH)
Recommends: ollama-ggml
Processing files: ollama-ggml-0.12.3-1.fc44.x86_64
Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.oHBMOE
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export LICENSEDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml
+ RPM_EC=0
++ jobs -p
+ exit 0
Provides: bundled(llama-cpp) = b6121 ollama-ggml = 0.12.3-1.fc44 ollama-ggml(x86-64) = 0.12.3-1.fc44
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) rtld(GNU_HASH)
Processing files: ollama-ggml-cpu-0.12.3-1.fc44.x86_64
Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.G6Z73J
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export LICENSEDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu
+ RPM_EC=0
++ jobs -p
+ exit 0
Provides: bundled(llama-cpp) = b6121 ollama-ggml-cpu = 0.12.3-1.fc44 ollama-ggml-cpu(x86-64) = 0.12.3-1.fc44
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) rtld(GNU_HASH)
Supplements: ollama-ggml(x86-64)
Processing files: ollama-ggml-rocm-0.12.3-1.fc44.x86_64
Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.0dEsu9
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export LICENSEDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm
+ cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm
+ RPM_EC=0
++ jobs -p
+ exit 0
Provides: bundled(llama-cpp) = b6121 ollama-ggml-rocm = 0.12.3-1.fc44 ollama-ggml-rocm(x86-64) = 0.12.3-1.fc44
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires: libamdhip64.so.7()(64bit) libamdhip64.so.7(hip_4.2)(64bit) libamdhip64.so.7(hip_6.0)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libhipblas.so.3()(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) librocblas.so.5()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit)
Supplements: if ollama-ggml(x86-64) rocm-hip(x86-64)
Processing files: ollama-debugsource-0.12.3-1.fc44.x86_64
Provides: ollama-debugsource = 0.12.3-1.fc44 ollama-debugsource(x86-64) = 0.12.3-1.fc44
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Processing files: ollama-debuginfo-0.12.3-1.fc44.x86_64
Provides: debuginfo(build-id) = 4d7be77d95f8b86a41409cb9cb801bea2a14e06b ollama-debuginfo = 0.12.3-1.fc44 ollama-debuginfo(x86-64) = 0.12.3-1.fc44
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc44
Processing files: ollama-ggml-debuginfo-0.12.3-1.fc44.x86_64
Provides: debuginfo(build-id) = 5401f39824fcb2f0ff9878f59df40ce24c4ad3da libggml-base.so-0.12.3-1.fc44.x86_64.debug()(64bit) ollama-ggml-debuginfo = 0.12.3-1.fc44 ollama-ggml-debuginfo(x86-64) = 0.12.3-1.fc44
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc44
Processing files: ollama-ggml-cpu-debuginfo-0.12.3-1.fc44.x86_64
Provides: debuginfo(build-id) = 2ca18edc95f5d2385421ab249ad4b68b184bff5f debuginfo(build-id) = 504d1d8c501068eeda4c999a5d8129781f4321d5 debuginfo(build-id) = 57c7268617cc8dba15fbcc021bbb0670058dd87f debuginfo(build-id) = 65c1947622b85ac65264a7378e504a7f7dc3dff9 debuginfo(build-id) = c79e38c3fea3750defea6969295fe2f6be54830c debuginfo(build-id) = ca8f5a83399610a6ef6893a02226dbd2db02a0eb debuginfo(build-id) = ff91d12f52d19a7925bb1108b2d4680d64d8f740 libggml-cpu-alderlake.so-0.12.3-1.fc44.x86_64.debug()(64bit) libggml-cpu-haswell.so-0.12.3-1.fc44.x86_64.debug()(64bit) libggml-cpu-icelake.so-0.12.3-1.fc44.x86_64.debug()(64bit) libggml-cpu-sandybridge.so-0.12.3-1.fc44.x86_64.debug()(64bit) libggml-cpu-skylakex.so-0.12.3-1.fc44.x86_64.debug()(64bit) libggml-cpu-sse42.so-0.12.3-1.fc44.x86_64.debug()(64bit) libggml-cpu-x64.so-0.12.3-1.fc44.x86_64.debug()(64bit) ollama-ggml-cpu-debuginfo = 0.12.3-1.fc44 ollama-ggml-cpu-debuginfo(x86-64) = 0.12.3-1.fc44
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc44
Processing files: ollama-ggml-rocm-debuginfo-0.12.3-1.fc44.x86_64
Provides: debuginfo(build-id) = 071689f43d0279cf3b42c33278d947e85c6a1399 libggml-hip.so-0.12.3-1.fc44.x86_64.debug()(64bit) ollama-ggml-rocm-debuginfo = 0.12.3-1.fc44 ollama-ggml-rocm-debuginfo(x86-64) = 0.12.3-1.fc44
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc44
Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT
Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc44.src.rpm
Wrote: /builddir/build/RPMS/ollama-debugsource-0.12.3-1.fc44.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-0.12.3-1.fc44.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-cpu-0.12.3-1.fc44.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-debuginfo-0.12.3-1.fc44.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-rocm-debuginfo-0.12.3-1.fc44.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-0.12.3-1.fc44.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-cpu-debuginfo-0.12.3-1.fc44.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-debuginfo-0.12.3-1.fc44.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-ggml-rocm-0.12.3-1.fc44.x86_64.rpm
RPM build warnings:
    File listed twice: /usr/share/licenses/ollama
Finish: rpmbuild ollama-0.12.3-1.fc44.src.rpm
Finish: build phase for ollama-0.12.3-1.fc44.src.rpm
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-rawhide-x86_64-1759552642.238252/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
INFO: Done(/var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc44.src.rpm) Config(child) 76 minutes 41 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
Finish: run
Running RPMResults tool
Package info: {
    "packages": [
        { "name": "ollama-debugsource", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama-ggml-rocm", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama-ggml-cpu", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama-ggml-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama-ggml-cpu-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama-ggml", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama-ggml-rocm-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "x86_64" },
        { "name": "ollama", "epoch": null, "version": "0.12.3", "release": "1.fc44", "arch": "src" }
    ]
}
RPMResults finished
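Note: the Provides/Requires/Recommends metadata recorded in the "Processing files" sections above can be re-checked against the binary RPMs once they are downloaded from the build's results directory. A minimal sketch using standard rpm query options (the file names simply mirror the "Wrote:" lines and are assumed to be present in the current directory):

    rpm -qp --provides ollama-ggml-rocm-0.12.3-1.fc44.x86_64.rpm    # e.g. bundled(llama-cpp) = b6121
    rpm -qp --requires ollama-ggml-rocm-0.12.3-1.fc44.x86_64.rpm    # e.g. libamdhip64.so.7, libhipblas.so.3, librocblas.so.5
    rpm -qp --recommends ollama-0.12.3-1.fc44.x86_64.rpm            # e.g. ollama-ggml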