Warning: Permanently added '44.220.49.130' (ED25519) to the list of known hosts.

You can reproduce this build on your computer by running:

  sudo dnf install copr-rpmbuild
  /usr/bin/copr-rpmbuild --verbose --drop-resultdir --task-url https://copr.fedorainfracloud.org/backend/get-build-task/9644096-fedora-43-x86_64 --chroot fedora-43-x86_64

Version: 1.6
PID: 8564
Logging PID: 8566
Task: {'allow_user_ssh': False, 'appstream': False, 'background': False, 'build_id': 9644096, 'buildroot_pkgs': [], 'chroot': 'fedora-43-x86_64', 'enable_net': False, 'fedora_review': False, 'git_hash': 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd', 'git_repo': 'https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama', 'isolation': 'default', 'memory_reqs': 2048, 'package_name': 'ollama', 'package_version': '0.12.3-1', 'project_dirname': 'ollama', 'project_name': 'ollama', 'project_owner': 'fachep', 'repo_priority': None, 'repos': [{'baseurl': 'https://download.copr.fedorainfracloud.org/results/fachep/ollama/fedora-43-x86_64/', 'id': 'copr_base', 'name': 'Copr repository', 'priority': None}, {'baseurl': 'https://developer.download.nvidia.cn/compute/cuda/repos/fedora42/x86_64/', 'id': 'https_developer_download_nvidia_cn_compute_cuda_repos_fedora42_x86_64', 'name': 'Additional repo https_developer_download_nvidia_cn_compute_cuda_repos_fedora42_x86_64'}, {'baseurl': 'https://developer.download.nvidia.cn/compute/cuda/repos/fedora41/x86_64/', 'id': 'https_developer_download_nvidia_cn_compute_cuda_repos_fedora41_x86_64', 'name': 'Additional repo https_developer_download_nvidia_cn_compute_cuda_repos_fedora41_x86_64'}], 'sandbox': 'fachep/ollama--fachep', 'source_json': {}, 'source_type': None, 'ssh_public_keys': None, 'storage': 0, 'submitter': 'fachep', 'tags': [], 'task_id': '9644096-fedora-43-x86_64', 'timeout': 18000, 'uses_devel_repo': False, 'with_opts': [], 'without_opts': []}

Running: git clone https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama /var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama --depth 500 --no-single-branch --recursive
cmd: ['git', 'clone', 'https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama', '/var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama', '--depth', '500', '--no-single-branch', '--recursive']
cwd: .
rc: 0
stdout:
stderr:
Cloning into '/var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama'...

Running: git checkout bd90d2d0f4106e3a74de46dced869f2b79bfddfd --
cmd: ['git', 'checkout', 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd', '--']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama
rc: 0
stdout:
stderr:
Note: switching to 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd'.

You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command.
Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at bd90d2d automatic import of ollama

Running: dist-git-client sources
cmd: ['dist-git-client', 'sources']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama
rc: 0
stdout:
stderr:
INFO: Reading stdout from command: git rev-parse --abbrev-ref HEAD
INFO: Reading stdout from command: git rev-parse HEAD
INFO: Reading sources specification file: sources
INFO: Downloading ollama-0.12.3.tar.gz
INFO: Reading stdout from command: curl --help all
INFO: Calling: curl -H Pragma: -o ollama-0.12.3.tar.gz --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/ollama-0.12.3.tar.gz/md5/f096acee5e82596e9afd4d07ed477de2/ollama-0.12.3.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.5M  100 10.5M    0     0   286M      0 --:--:-- --:--:-- --:--:--  292M
INFO: Reading stdout from command: md5sum ollama-0.12.3.tar.gz
INFO: Downloading vendor.tar.bz2
INFO: Calling: curl -H Pragma: -o vendor.tar.bz2 --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/vendor.tar.bz2/md5/c608d605610ed47b385cf54a6f6b2a2c/vendor.tar.bz2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6402k  100 6402k    0     0   222M      0 --:--:-- --:--:-- --:--:--  223M
INFO: Reading stdout from command: md5sum vendor.tar.bz2
tail: /var/lib/copr-rpmbuild/main.log: file truncated

Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1759552642.548045 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: mock.py version 6.3 starting (python version = 3.13.7, NVR = mock-6.3-1.fc42), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1759552642.548045 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
Start(bootstrap): init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish(bootstrap): init plugins
Start: init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish: init plugins
INFO: Signal handler active
Start: run
INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama/ollama.spec) Config(fedora-43-x86_64)
Start: clean chroot
Finish: clean chroot
Mock Version: 6.3
INFO: Mock Version: 6.3
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-43-x86_64-bootstrap-1759552642.548045/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
INFO: Guessed host environment type: unknown
INFO: Using container image: registry.fedoraproject.org/fedora:43
INFO: Pulling image: registry.fedoraproject.org/fedora:43
INFO: Tagging container image as mock-bootstrap-b8ec74e1-7c14-4bb8-844d-7bfdb818c790
INFO: Checking that 7080cc9e49481c4075e8a326ad905c8334671085b1887aaac101bb530e3296c0 image matches host's architecture
INFO: Copy content of container 7080cc9e49481c4075e8a326ad905c8334671085b1887aaac101bb530e3296c0 to /var/lib/mock/fedora-43-x86_64-bootstrap-1759552642.548045/root
INFO: mounting 7080cc9e49481c4075e8a326ad905c8334671085b1887aaac101bb530e3296c0 with podman image mount
INFO: image 7080cc9e49481c4075e8a326ad905c8334671085b1887aaac101bb530e3296c0 as /var/lib/containers/storage/overlay/95d96542144a5e75e65d2526631ac2a35ea76ad32178bdd6830a49880b53e7aa/merged
INFO: umounting image 7080cc9e49481c4075e8a326ad905c8334671085b1887aaac101bb530e3296c0 (/var/lib/containers/storage/overlay/95d96542144a5e75e65d2526631ac2a35ea76ad32178bdd6830a49880b53e7aa/merged) with podman image umount
INFO: Removing image mock-bootstrap-b8ec74e1-7c14-4bb8-844d-7bfdb818c790
INFO: Package manager dnf5 detected and used (fallback)
INFO: Not updating bootstrap chroot, bootstrap_image_ready=True
Start(bootstrap): creating root cache
Finish(bootstrap): creating root cache
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-43-x86_64-1759552642.548045/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Package manager dnf5 detected and used (direct choice)
INFO: Buildroot is handled by package management downloaded with a bootstrap image:
  rpm-6.0.0-1.fc43.x86_64
  rpm-sequoia-1.9.0-2.fc43.x86_64
  dnf5-5.2.17.0-2.fc43.x86_64
  dnf5-plugins-5.2.17.0-2.fc43.x86_64
Start: installing minimal buildroot with dnf5
Updating and loading repositories:
 Copr repository                        100% |   4.7 KiB/s |   1.6 KiB | 00m00s
 Additional repo https_developer_downlo 100% |  88.8 KiB/s |  47.8 KiB | 00m01s
 Additional repo https_developer_downlo 100% | 181.0 KiB/s | 109.0 KiB | 00m01s
 updates                                100% |  32.4 KiB/s |  33.3 KiB | 00m01s
 fedora                                 100% |  21.6 MiB/s |  35.7 MiB | 00m02s
Repositories loaded.
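(Aside: the dist-git-client step earlier in this log fetches each source from the Copr lookaside cache and checks it with md5sum. A minimal sketch of repeating that fetch-and-verify by hand, reusing the URL and the MD5 checksum that appear in the log above, run from any scratch directory:

  # fetch the primary source the same way dist-git-client does above
  curl --location --fail --retry 3 --retry-delay 10 -o ollama-0.12.3.tar.gz \
      https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/ollama-0.12.3.tar.gz/md5/f096acee5e82596e9afd4d07ed477de2/ollama-0.12.3.tar.gz
  # the expected MD5 is embedded in the lookaside path; confirm the download matches it
  echo "f096acee5e82596e9afd4d07ed477de2  ollama-0.12.3.tar.gz" | md5sum -c -

This is only an illustration of the step the log performed automatically; the authoritative reproduction command is the copr-rpmbuild invocation quoted at the top of the log.)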
Package Arch Version Repository Size Installing group/module packages: bash x86_64 5.3.0-2.fc43 fedora 8.4 MiB bzip2 x86_64 1.0.8-21.fc43 fedora 95.3 KiB coreutils x86_64 9.7-6.fc43 fedora 5.4 MiB cpio x86_64 2.15-6.fc43 fedora 1.1 MiB diffutils x86_64 3.12-3.fc43 fedora 1.6 MiB fedora-release-common noarch 43-0.22 fedora 20.4 KiB findutils x86_64 1:4.10.0-6.fc43 fedora 1.8 MiB gawk x86_64 5.3.2-2.fc43 fedora 1.8 MiB glibc-minimal-langpack x86_64 2.42-4.fc43 fedora 0.0 B grep x86_64 3.12-2.fc43 fedora 1.0 MiB gzip x86_64 1.13-4.fc43 fedora 388.8 KiB info x86_64 7.2-6.fc43 fedora 353.9 KiB patch x86_64 2.8-2.fc43 fedora 222.8 KiB redhat-rpm-config noarch 343-11.fc43 fedora 182.9 KiB rpm-build x86_64 6.0.0-1.fc43 fedora 287.4 KiB sed x86_64 4.9-5.fc43 fedora 857.3 KiB shadow-utils x86_64 2:4.18.0-3.fc43 fedora 3.9 MiB tar x86_64 2:1.35-6.fc43 fedora 2.9 MiB unzip x86_64 6.0-67.fc43 fedora 386.3 KiB util-linux x86_64 2.41.1-17.fc43 fedora 3.5 MiB which x86_64 2.23-3.fc43 fedora 83.5 KiB xz x86_64 1:5.8.1-2.fc43 fedora 1.3 MiB Installing dependencies: add-determinism x86_64 0.6.0-2.fc43 fedora 2.4 MiB alternatives x86_64 1.33-2.fc43 fedora 62.2 KiB ansible-srpm-macros noarch 1-18.1.fc43 fedora 35.7 KiB audit-libs x86_64 4.1.1-2.fc43 fedora 378.8 KiB binutils x86_64 2.45-1.fc43 fedora 26.5 MiB build-reproducibility-srpm-macros noarch 0.6.0-2.fc43 fedora 735.0 B bzip2-libs x86_64 1.0.8-21.fc43 fedora 80.6 KiB ca-certificates noarch 2025.2.80_v9.0.304-1.1.fc43 fedora 2.7 MiB coreutils-common x86_64 9.7-6.fc43 fedora 11.3 MiB crypto-policies noarch 20250714-5.gitcd6043a.fc43 fedora 146.9 KiB curl x86_64 8.15.0-2.fc43 fedora 473.6 KiB cyrus-sasl-lib x86_64 2.1.28-33.fc43 fedora 2.3 MiB debugedit x86_64 5.2-3.fc43 fedora 214.0 KiB dwz x86_64 0.16-2.fc43 fedora 287.1 KiB ed x86_64 1.22.2-1.fc43 fedora 148.1 KiB efi-srpm-macros noarch 6-4.fc43 fedora 40.1 KiB elfutils x86_64 0.193-3.fc43 fedora 2.9 MiB elfutils-debuginfod-client x86_64 0.193-3.fc43 fedora 83.9 KiB elfutils-default-yama-scope noarch 0.193-3.fc43 fedora 1.8 KiB elfutils-libelf x86_64 0.193-3.fc43 fedora 1.2 MiB elfutils-libs x86_64 0.193-3.fc43 fedora 683.4 KiB fedora-gpg-keys noarch 43-0.4 fedora 131.2 KiB fedora-release noarch 43-0.22 fedora 0.0 B fedora-release-identity-basic noarch 43-0.22 fedora 658.0 B fedora-repos noarch 43-0.4 fedora 4.9 KiB file x86_64 5.46-8.fc43 fedora 100.2 KiB file-libs x86_64 5.46-8.fc43 fedora 11.9 MiB filesystem x86_64 3.18-50.fc43 fedora 112.0 B filesystem-srpm-macros noarch 3.18-50.fc43 fedora 38.2 KiB fonts-srpm-macros noarch 1:2.0.5-23.fc43 fedora 55.8 KiB forge-srpm-macros noarch 0.4.0-3.fc43 fedora 38.9 KiB fpc-srpm-macros noarch 1.3-15.fc43 fedora 144.0 B gap-srpm-macros noarch 1-1.fc43 fedora 2.0 KiB gdb-minimal x86_64 16.3-6.fc43 fedora 13.3 MiB gdbm-libs x86_64 1:1.23-10.fc43 fedora 129.9 KiB ghc-srpm-macros noarch 1.9.2-3.fc43 fedora 779.0 B glibc x86_64 2.42-4.fc43 fedora 6.7 MiB glibc-common x86_64 2.42-4.fc43 fedora 1.0 MiB glibc-gconv-extra x86_64 2.42-4.fc43 fedora 7.2 MiB gmp x86_64 1:6.3.0-4.fc43 fedora 811.2 KiB gnat-srpm-macros noarch 6-8.fc43 fedora 1.0 KiB gnulib-l10n noarch 20241231-1.fc43 fedora 655.0 KiB gnupg2 x86_64 2.4.8-4.fc43 fedora 6.5 MiB gnupg2-dirmngr x86_64 2.4.8-4.fc43 fedora 618.4 KiB gnupg2-gpg-agent x86_64 2.4.8-4.fc43 fedora 671.4 KiB gnupg2-gpgconf x86_64 2.4.8-4.fc43 fedora 250.0 KiB gnupg2-keyboxd x86_64 2.4.8-4.fc43 fedora 201.4 KiB gnupg2-verify x86_64 2.4.8-4.fc43 fedora 348.5 KiB gnutls x86_64 3.8.10-3.fc43 fedora 3.8 MiB go-srpm-macros noarch 3.8.0-1.fc43 
fedora 61.9 KiB gpgverify noarch 2.2-3.fc43 fedora 8.7 KiB ima-evm-utils-libs x86_64 1.6.2-6.fc43 fedora 60.7 KiB jansson x86_64 2.14-3.fc43 fedora 89.1 KiB java-srpm-macros noarch 1-7.fc43 fedora 870.0 B json-c x86_64 0.18-7.fc43 fedora 82.7 KiB kernel-srpm-macros noarch 1.0-27.fc43 fedora 1.9 KiB keyutils-libs x86_64 1.6.3-6.fc43 fedora 54.3 KiB krb5-libs x86_64 1.21.3-7.fc43 fedora 2.3 MiB libacl x86_64 2.3.2-4.fc43 fedora 35.9 KiB libarchive x86_64 3.8.1-3.fc43 fedora 951.1 KiB libassuan x86_64 2.5.7-4.fc43 fedora 163.8 KiB libattr x86_64 2.5.2-6.fc43 fedora 24.4 KiB libblkid x86_64 2.41.1-17.fc43 fedora 262.4 KiB libbrotli x86_64 1.1.0-10.fc43 fedora 833.3 KiB libcap x86_64 2.76-3.fc43 fedora 209.1 KiB libcap-ng x86_64 0.8.5-8.fc43 fedora 68.9 KiB libcom_err x86_64 1.47.3-2.fc43 fedora 63.1 KiB libcurl x86_64 8.15.0-2.fc43 fedora 903.2 KiB libeconf x86_64 0.7.9-2.fc43 fedora 64.9 KiB libevent x86_64 2.1.12-16.fc43 fedora 883.1 KiB libfdisk x86_64 2.41.1-17.fc43 fedora 380.4 KiB libffi x86_64 3.5.1-2.fc43 fedora 83.6 KiB libfsverity x86_64 1.6-3.fc43 fedora 28.5 KiB libgcc x86_64 15.2.1-2.fc43 fedora 266.6 KiB libgcrypt x86_64 1.11.1-2.fc43 fedora 1.6 MiB libgomp x86_64 15.2.1-2.fc43 fedora 541.1 KiB libgpg-error x86_64 1.55-2.fc43 fedora 915.3 KiB libidn2 x86_64 2.3.8-2.fc43 fedora 552.5 KiB libksba x86_64 1.6.7-4.fc43 fedora 398.5 KiB liblastlog2 x86_64 2.41.1-17.fc43 fedora 33.9 KiB libmount x86_64 2.41.1-17.fc43 fedora 372.7 KiB libnghttp2 x86_64 1.66.0-2.fc43 fedora 162.2 KiB libpkgconf x86_64 2.3.0-3.fc43 fedora 78.1 KiB libpsl x86_64 0.21.5-6.fc43 fedora 76.4 KiB libselinux x86_64 3.9-5.fc43 fedora 193.1 KiB libsemanage x86_64 3.9-4.fc43 fedora 308.5 KiB libsepol x86_64 3.9-2.fc43 fedora 822.0 KiB libsmartcols x86_64 2.41.1-17.fc43 fedora 180.5 KiB libssh x86_64 0.11.3-1.fc43 fedora 567.1 KiB libssh-config noarch 0.11.3-1.fc43 fedora 277.0 B libstdc++ x86_64 15.2.1-2.fc43 fedora 2.8 MiB libtasn1 x86_64 4.20.0-2.fc43 fedora 176.3 KiB libtool-ltdl x86_64 2.5.4-7.fc43 fedora 70.1 KiB libunistring x86_64 1.1-10.fc43 fedora 1.7 MiB libusb1 x86_64 1.0.29-4.fc43 fedora 171.3 KiB libuuid x86_64 2.41.1-17.fc43 fedora 37.4 KiB libverto x86_64 0.3.2-11.fc43 fedora 25.4 KiB libxcrypt x86_64 4.4.38-8.fc43 fedora 284.5 KiB libxml2 x86_64 2.12.10-5.fc43 fedora 1.7 MiB libzstd x86_64 1.5.7-2.fc43 fedora 799.9 KiB lua-libs x86_64 5.4.8-2.fc43 fedora 280.8 KiB lua-srpm-macros noarch 1-16.fc43 fedora 1.3 KiB lz4-libs x86_64 1.10.0-3.fc43 fedora 161.4 KiB mpfr x86_64 4.2.2-2.fc43 fedora 832.8 KiB ncurses-base noarch 6.5-7.20250614.fc43 fedora 328.1 KiB ncurses-libs x86_64 6.5-7.20250614.fc43 fedora 946.3 KiB nettle x86_64 3.10.1-2.fc43 fedora 790.6 KiB npth x86_64 1.8-3.fc43 fedora 49.6 KiB ocaml-srpm-macros noarch 11-2.fc43 fedora 1.9 KiB openblas-srpm-macros noarch 2-20.fc43 fedora 112.0 B openldap x86_64 2.6.10-4.fc43 fedora 659.9 KiB openssl-libs x86_64 1:3.5.1-2.fc43 fedora 8.9 MiB p11-kit x86_64 0.25.8-1.fc43 fedora 2.3 MiB p11-kit-trust x86_64 0.25.8-1.fc43 fedora 446.5 KiB package-notes-srpm-macros noarch 0.5-14.fc43 fedora 1.6 KiB pam-libs x86_64 1.7.1-3.fc43 fedora 126.8 KiB pcre2 x86_64 10.46-1.fc43 fedora 697.7 KiB pcre2-syntax noarch 10.46-1.fc43 fedora 275.3 KiB perl-srpm-macros noarch 1-60.fc43 fedora 861.0 B pkgconf x86_64 2.3.0-3.fc43 fedora 88.5 KiB pkgconf-m4 noarch 2.3.0-3.fc43 fedora 14.4 KiB pkgconf-pkg-config x86_64 2.3.0-3.fc43 fedora 989.0 B popt x86_64 1.19-9.fc43 fedora 132.8 KiB publicsuffix-list-dafsa noarch 20250616-2.fc43 fedora 69.1 KiB pyproject-srpm-macros noarch 
1.18.4-1.fc43 fedora 1.9 KiB python-srpm-macros noarch 3.14-5.fc43 fedora 51.5 KiB qt5-srpm-macros noarch 5.15.17-2.fc43 fedora 500.0 B qt6-srpm-macros noarch 6.9.2-1.fc43 fedora 464.0 B readline x86_64 8.3-2.fc43 fedora 511.7 KiB rpm x86_64 6.0.0-1.fc43 fedora 3.1 MiB rpm-build-libs x86_64 6.0.0-1.fc43 fedora 268.4 KiB rpm-libs x86_64 6.0.0-1.fc43 fedora 933.7 KiB rpm-sequoia x86_64 1.9.0-2.fc43 fedora 2.5 MiB rpm-sign-libs x86_64 6.0.0-1.fc43 fedora 39.7 KiB rust-srpm-macros noarch 26.4-1.fc43 fedora 4.8 KiB setup noarch 2.15.0-26.fc43 fedora 725.0 KiB sqlite-libs x86_64 3.50.2-2.fc43 fedora 1.5 MiB systemd-libs x86_64 258-1.fc43 fedora 2.3 MiB systemd-standalone-sysusers x86_64 258-1.fc43 fedora 293.5 KiB tpm2-tss x86_64 4.1.3-8.fc43 fedora 1.6 MiB tree-sitter-srpm-macros noarch 0.4.2-1.fc43 fedora 8.3 KiB util-linux-core x86_64 2.41.1-17.fc43 fedora 1.5 MiB xxhash-libs x86_64 0.8.3-3.fc43 fedora 90.2 KiB xz-libs x86_64 1:5.8.1-2.fc43 fedora 217.8 KiB zig-srpm-macros noarch 1-5.fc43 fedora 1.1 KiB zip x86_64 3.0-44.fc43 fedora 694.5 KiB zlib-ng-compat x86_64 2.2.5-2.fc43 fedora 137.6 KiB zstd x86_64 1.5.7-2.fc43 fedora 1.7 MiB Installing groups: Buildsystem building group Transaction Summary: Installing: 170 packages Total size of inbound packages is 59 MiB. Need to download 59 MiB. After this operation, 199 MiB extra will be used (install 199 MiB, remove 0 B). [ 1/170] bzip2-0:1.0.8-21.fc43.x86_64 100% | 273.1 KiB/s | 51.6 KiB | 00m00s [ 2/170] cpio-0:2.15-6.fc43.x86_64 100% | 2.0 MiB/s | 293.1 KiB | 00m00s [ 3/170] coreutils-0:9.7-6.fc43.x86_64 100% | 2.9 MiB/s | 1.1 MiB | 00m00s [ 4/170] bash-0:5.3.0-2.fc43.x86_64 100% | 4.5 MiB/s | 1.9 MiB | 00m00s [ 5/170] fedora-release-common-0:43-0. 100% | 556.1 KiB/s | 25.0 KiB | 00m00s [ 6/170] diffutils-0:3.12-3.fc43.x86_6 100% | 3.2 MiB/s | 392.3 KiB | 00m00s [ 7/170] findutils-1:4.10.0-6.fc43.x86 100% | 11.4 MiB/s | 550.0 KiB | 00m00s [ 8/170] glibc-minimal-langpack-0:2.42 100% | 933.3 KiB/s | 38.3 KiB | 00m00s [ 9/170] gzip-0:1.13-4.fc43.x86_64 100% | 3.7 MiB/s | 170.1 KiB | 00m00s [ 10/170] grep-0:3.12-2.fc43.x86_64 100% | 5.4 MiB/s | 299.1 KiB | 00m00s [ 11/170] info-0:7.2-6.fc43.x86_64 100% | 3.6 MiB/s | 182.9 KiB | 00m00s [ 12/170] patch-0:2.8-2.fc43.x86_64 100% | 2.3 MiB/s | 113.8 KiB | 00m00s [ 13/170] redhat-rpm-config-0:343-11.fc 100% | 1.2 MiB/s | 79.1 KiB | 00m00s [ 14/170] rpm-build-0:6.0.0-1.fc43.x86_ 100% | 2.8 MiB/s | 138.0 KiB | 00m00s [ 15/170] sed-0:4.9-5.fc43.x86_64 100% | 5.5 MiB/s | 317.1 KiB | 00m00s [ 16/170] unzip-0:6.0-67.fc43.x86_64 100% | 3.7 MiB/s | 183.7 KiB | 00m00s [ 17/170] which-0:2.23-3.fc43.x86_64 100% | 907.2 KiB/s | 41.7 KiB | 00m00s [ 18/170] tar-2:1.35-6.fc43.x86_64 100% | 6.2 MiB/s | 856.4 KiB | 00m00s [ 19/170] shadow-utils-2:4.18.0-3.fc43. 
100% | 8.8 MiB/s | 1.3 MiB | 00m00s [ 20/170] xz-1:5.8.1-2.fc43.x86_64 100% | 6.5 MiB/s | 572.5 KiB | 00m00s [ 21/170] gawk-0:5.3.2-2.fc43.x86_64 100% | 12.5 MiB/s | 1.1 MiB | 00m00s [ 22/170] util-linux-0:2.41.1-17.fc43.x 100% | 7.5 MiB/s | 1.2 MiB | 00m00s [ 23/170] filesystem-0:3.18-50.fc43.x86 100% | 12.0 MiB/s | 1.3 MiB | 00m00s [ 24/170] glibc-0:2.42-4.fc43.x86_64 100% | 18.1 MiB/s | 2.2 MiB | 00m00s [ 25/170] ncurses-libs-0:6.5-7.20250614 100% | 6.5 MiB/s | 332.7 KiB | 00m00s [ 26/170] bzip2-libs-0:1.0.8-21.fc43.x8 100% | 861.3 KiB/s | 43.1 KiB | 00m00s [ 27/170] coreutils-common-0:9.7-6.fc43 100% | 31.8 MiB/s | 2.1 MiB | 00m00s [ 28/170] libacl-0:2.3.2-4.fc43.x86_64 100% | 449.6 KiB/s | 24.3 KiB | 00m00s [ 29/170] libcap-0:2.76-3.fc43.x86_64 100% | 896.2 KiB/s | 86.9 KiB | 00m00s [ 30/170] gmp-1:6.3.0-4.fc43.x86_64 100% | 1.5 MiB/s | 319.3 KiB | 00m00s [ 31/170] libattr-0:2.5.2-6.fc43.x86_64 100% | 107.5 KiB/s | 17.9 KiB | 00m00s [ 32/170] libselinux-0:3.9-5.fc43.x86_6 100% | 1.6 MiB/s | 97.7 KiB | 00m00s [ 33/170] systemd-libs-0:258-1.fc43.x86 100% | 13.3 MiB/s | 819.8 KiB | 00m00s [ 34/170] fedora-repos-0:43-0.4.noarch 100% | 206.5 KiB/s | 9.1 KiB | 00m00s [ 35/170] glibc-common-0:2.42-4.fc43.x8 100% | 5.4 MiB/s | 325.2 KiB | 00m00s [ 36/170] pcre2-0:10.46-1.fc43.x86_64 100% | 5.4 MiB/s | 262.2 KiB | 00m00s [ 37/170] ed-0:1.22.2-1.fc43.x86_64 100% | 1.6 MiB/s | 83.7 KiB | 00m00s [ 38/170] ansible-srpm-macros-0:1-18.1. 100% | 485.5 KiB/s | 19.9 KiB | 00m00s [ 39/170] build-reproducibility-srpm-ma 100% | 246.3 KiB/s | 11.8 KiB | 00m00s [ 40/170] dwz-0:0.16-2.fc43.x86_64 100% | 2.7 MiB/s | 135.5 KiB | 00m00s [ 41/170] efi-srpm-macros-0:6-4.fc43.no 100% | 361.2 KiB/s | 22.4 KiB | 00m00s [ 42/170] file-0:5.46-8.fc43.x86_64 100% | 887.6 KiB/s | 48.8 KiB | 00m00s [ 43/170] filesystem-srpm-macros-0:3.18 100% | 677.3 KiB/s | 26.4 KiB | 00m00s [ 44/170] fonts-srpm-macros-1:2.0.5-23. 100% | 503.2 KiB/s | 27.2 KiB | 00m00s [ 45/170] forge-srpm-macros-0:0.4.0-3.f 100% | 502.2 KiB/s | 20.1 KiB | 00m00s [ 46/170] fpc-srpm-macros-0:1.3-15.fc43 100% | 197.3 KiB/s | 7.9 KiB | 00m00s [ 47/170] gap-srpm-macros-0:1-1.fc43.no 100% | 175.5 KiB/s | 8.6 KiB | 00m00s [ 48/170] ghc-srpm-macros-0:1.9.2-3.fc4 100% | 150.8 KiB/s | 8.7 KiB | 00m00s [ 49/170] gnat-srpm-macros-0:6-8.fc43.n 100% | 143.8 KiB/s | 8.5 KiB | 00m00s [ 50/170] go-srpm-macros-0:3.8.0-1.fc43 100% | 602.3 KiB/s | 28.3 KiB | 00m00s [ 51/170] java-srpm-macros-0:1-7.fc43.n 100% | 184.7 KiB/s | 7.9 KiB | 00m00s [ 52/170] kernel-srpm-macros-0:1.0-27.f 100% | 159.3 KiB/s | 8.9 KiB | 00m00s [ 53/170] lua-srpm-macros-0:1-16.fc43.n 100% | 208.5 KiB/s | 8.8 KiB | 00m00s [ 54/170] ocaml-srpm-macros-0:11-2.fc43 100% | 162.5 KiB/s | 9.3 KiB | 00m00s [ 55/170] openblas-srpm-macros-0:2-20.f 100% | 172.6 KiB/s | 7.6 KiB | 00m00s [ 56/170] package-notes-srpm-macros-0:0 100% | 187.2 KiB/s | 9.0 KiB | 00m00s [ 57/170] perl-srpm-macros-0:1-60.fc43. 100% | 192.8 KiB/s | 8.3 KiB | 00m00s [ 58/170] pyproject-srpm-macros-0:1.18. 
100% | 249.0 KiB/s | 13.7 KiB | 00m00s [ 59/170] python-srpm-macros-0:3.14-5.f 100% | 449.4 KiB/s | 23.4 KiB | 00m00s [ 60/170] qt5-srpm-macros-0:5.15.17-2.f 100% | 173.2 KiB/s | 8.7 KiB | 00m00s [ 61/170] qt6-srpm-macros-0:6.9.2-1.fc4 100% | 177.1 KiB/s | 9.4 KiB | 00m00s [ 62/170] rust-srpm-macros-0:26.4-1.fc4 100% | 264.7 KiB/s | 11.1 KiB | 00m00s [ 63/170] rpm-0:6.0.0-1.fc43.x86_64 100% | 8.9 MiB/s | 576.3 KiB | 00m00s [ 64/170] tree-sitter-srpm-macros-0:0.4 100% | 296.7 KiB/s | 13.4 KiB | 00m00s [ 65/170] zig-srpm-macros-0:1-5.fc43.no 100% | 168.7 KiB/s | 8.4 KiB | 00m00s [ 66/170] zip-0:3.0-44.fc43.x86_64 100% | 3.6 MiB/s | 261.6 KiB | 00m00s [ 67/170] debugedit-0:5.2-3.fc43.x86_64 100% | 1.0 MiB/s | 85.6 KiB | 00m00s [ 68/170] elfutils-0:0.193-3.fc43.x86_6 100% | 8.5 MiB/s | 571.3 KiB | 00m00s [ 69/170] elfutils-libelf-0:0.193-3.fc4 100% | 4.9 MiB/s | 207.8 KiB | 00m00s [ 70/170] libarchive-0:3.8.1-3.fc43.x86 100% | 8.1 MiB/s | 421.1 KiB | 00m00s [ 71/170] libgcc-0:15.2.1-2.fc43.x86_64 100% | 3.0 MiB/s | 133.0 KiB | 00m00s [ 72/170] libstdc++-0:15.2.1-2.fc43.x86 100% | 15.8 MiB/s | 920.1 KiB | 00m00s [ 73/170] popt-0:1.19-9.fc43.x86_64 100% | 1.3 MiB/s | 65.7 KiB | 00m00s [ 74/170] readline-0:8.3-2.fc43.x86_64 100% | 3.2 MiB/s | 224.6 KiB | 00m00s [ 75/170] rpm-build-libs-0:6.0.0-1.fc43 100% | 2.0 MiB/s | 127.9 KiB | 00m00s [ 76/170] rpm-libs-0:6.0.0-1.fc43.x86_6 100% | 7.7 MiB/s | 400.2 KiB | 00m00s [ 77/170] audit-libs-0:4.1.1-2.fc43.x86 100% | 2.7 MiB/s | 138.5 KiB | 00m00s [ 78/170] zstd-0:1.5.7-2.fc43.x86_64 100% | 6.3 MiB/s | 485.9 KiB | 00m00s [ 79/170] libeconf-0:0.7.9-2.fc43.x86_6 100% | 503.0 KiB/s | 35.2 KiB | 00m00s [ 80/170] libsemanage-0:3.9-4.fc43.x86_ 100% | 3.1 MiB/s | 123.5 KiB | 00m00s [ 81/170] libxcrypt-0:4.4.38-8.fc43.x86 100% | 3.0 MiB/s | 127.0 KiB | 00m00s [ 82/170] pam-libs-0:1.7.1-3.fc43.x86_6 100% | 1.1 MiB/s | 57.5 KiB | 00m00s [ 83/170] setup-0:2.15.0-26.fc43.noarch 100% | 3.0 MiB/s | 157.3 KiB | 00m00s [ 84/170] xz-libs-1:5.8.1-2.fc43.x86_64 100% | 2.6 MiB/s | 112.9 KiB | 00m00s [ 85/170] mpfr-0:4.2.2-2.fc43.x86_64 100% | 7.2 MiB/s | 347.0 KiB | 00m00s [ 86/170] libblkid-0:2.41.1-17.fc43.x86 100% | 2.5 MiB/s | 123.1 KiB | 00m00s [ 87/170] libcap-ng-0:0.8.5-8.fc43.x86_ 100% | 655.8 KiB/s | 32.1 KiB | 00m00s [ 88/170] libfdisk-0:2.41.1-17.fc43.x86 100% | 4.1 MiB/s | 161.3 KiB | 00m00s [ 89/170] liblastlog2-0:2.41.1-17.fc43. 
100% | 539.6 KiB/s | 23.2 KiB | 00m00s [ 90/170] libmount-0:2.41.1-17.fc43.x86 100% | 3.8 MiB/s | 162.5 KiB | 00m00s [ 91/170] libsmartcols-0:2.41.1-17.fc43 100% | 2.0 MiB/s | 84.0 KiB | 00m00s [ 92/170] libuuid-0:2.41.1-17.fc43.x86_ 100% | 672.5 KiB/s | 26.2 KiB | 00m00s [ 93/170] util-linux-core-0:2.41.1-17.f 100% | 11.2 MiB/s | 550.9 KiB | 00m00s [ 94/170] zlib-ng-compat-0:2.2.5-2.fc43 100% | 1.5 MiB/s | 79.2 KiB | 00m00s [ 95/170] ncurses-base-0:6.5-7.20250614 100% | 1.5 MiB/s | 88.2 KiB | 00m00s [ 96/170] glibc-gconv-extra-0:2.42-4.fc 100% | 15.0 MiB/s | 1.6 MiB | 00m00s [ 97/170] gnulib-l10n-0:20241231-1.fc43 100% | 2.2 MiB/s | 150.2 KiB | 00m00s [ 98/170] libsepol-0:3.9-2.fc43.x86_64 100% | 5.7 MiB/s | 345.4 KiB | 00m00s [ 99/170] pcre2-syntax-0:10.46-1.fc43.n 100% | 3.0 MiB/s | 162.2 KiB | 00m00s [100/170] fedora-gpg-keys-0:43-0.4.noar 100% | 1.8 MiB/s | 138.8 KiB | 00m00s [101/170] add-determinism-0:0.6.0-2.fc4 100% | 12.0 MiB/s | 919.3 KiB | 00m00s [102/170] curl-0:8.15.0-2.fc43.x86_64 100% | 3.8 MiB/s | 233.7 KiB | 00m00s [103/170] file-libs-0:5.46-8.fc43.x86_6 100% | 8.1 MiB/s | 850.3 KiB | 00m00s [104/170] elfutils-libs-0:0.193-3.fc43. 100% | 3.9 MiB/s | 269.7 KiB | 00m00s [105/170] elfutils-debuginfod-client-0: 100% | 1.1 MiB/s | 46.8 KiB | 00m00s [106/170] libzstd-0:1.5.7-2.fc43.x86_64 100% | 7.1 MiB/s | 314.6 KiB | 00m00s [107/170] libxml2-0:2.12.10-5.fc43.x86_ 100% | 14.1 MiB/s | 692.7 KiB | 00m00s [108/170] lz4-libs-0:1.10.0-3.fc43.x86_ 100% | 1.6 MiB/s | 78.0 KiB | 00m00s [109/170] libgomp-0:15.2.1-2.fc43.x86_6 100% | 8.5 MiB/s | 372.9 KiB | 00m00s [110/170] lua-libs-0:5.4.8-2.fc43.x86_6 100% | 2.8 MiB/s | 131.7 KiB | 00m00s [111/170] rpm-sign-libs-0:6.0.0-1.fc43. 100% | 470.6 KiB/s | 28.2 KiB | 00m00s [112/170] rpm-sequoia-0:1.9.0-2.fc43.x8 100% | 16.6 MiB/s | 933.3 KiB | 00m00s [113/170] sqlite-libs-0:3.50.2-2.fc43.x 100% | 14.9 MiB/s | 760.5 KiB | 00m00s [114/170] elfutils-default-yama-scope-0 100% | 282.4 KiB/s | 12.4 KiB | 00m00s [115/170] json-c-0:0.18-7.fc43.x86_64 100% | 1.0 MiB/s | 45.0 KiB | 00m00s [116/170] gnupg2-0:2.4.8-4.fc43.x86_64 100% | 24.5 MiB/s | 1.6 MiB | 00m00s [117/170] ima-evm-utils-libs-0:1.6.2-6. 100% | 505.6 KiB/s | 29.3 KiB | 00m00s [118/170] libfsverity-0:1.6-3.fc43.x86_ 100% | 286.6 KiB/s | 18.6 KiB | 00m00s [119/170] gpgverify-0:2.2-3.fc43.noarch 100% | 213.5 KiB/s | 11.1 KiB | 00m00s [120/170] gnupg2-dirmngr-0:2.4.8-4.fc43 100% | 4.1 MiB/s | 274.6 KiB | 00m00s [121/170] openssl-libs-1:3.5.1-2.fc43.x 100% | 26.4 MiB/s | 2.6 MiB | 00m00s [122/170] gnupg2-gpg-agent-0:2.4.8-4.fc 100% | 6.5 MiB/s | 272.9 KiB | 00m00s [123/170] gnupg2-gpgconf-0:2.4.8-4.fc43 100% | 2.6 MiB/s | 115.0 KiB | 00m00s [124/170] gnupg2-verify-0:2.4.8-4.fc43. 100% | 3.9 MiB/s | 171.2 KiB | 00m00s [125/170] gnupg2-keyboxd-0:2.4.8-4.fc43 100% | 1.7 MiB/s | 94.7 KiB | 00m00s [126/170] libassuan-0:2.5.7-4.fc43.x86_ 100% | 1.3 MiB/s | 67.4 KiB | 00m00s [127/170] libgpg-error-0:1.55-2.fc43.x8 100% | 6.1 MiB/s | 244.3 KiB | 00m00s [128/170] libgcrypt-0:1.11.1-2.fc43.x86 100% | 9.5 MiB/s | 595.8 KiB | 00m00s [129/170] npth-0:1.8-3.fc43.x86_64 100% | 534.5 KiB/s | 25.7 KiB | 00m00s [130/170] tpm2-tss-0:4.1.3-8.fc43.x86_6 100% | 7.3 MiB/s | 425.9 KiB | 00m00s [131/170] crypto-policies-0:20250714-5. 
100% | 1.7 MiB/s | 98.5 KiB | 00m00s [132/170] ca-certificates-0:2025.2.80_v 100% | 9.7 MiB/s | 975.4 KiB | 00m00s [133/170] gnutls-0:3.8.10-3.fc43.x86_64 100% | 17.5 MiB/s | 1.4 MiB | 00m00s [134/170] libksba-0:1.6.7-4.fc43.x86_64 100% | 2.8 MiB/s | 160.4 KiB | 00m00s [135/170] openldap-0:2.6.10-4.fc43.x86_ 100% | 4.3 MiB/s | 259.6 KiB | 00m00s [136/170] libusb1-0:1.0.29-4.fc43.x86_6 100% | 1.6 MiB/s | 79.9 KiB | 00m00s [137/170] libidn2-0:2.3.8-2.fc43.x86_64 100% | 3.5 MiB/s | 174.9 KiB | 00m00s [138/170] libtasn1-0:4.20.0-2.fc43.x86_ 100% | 1.8 MiB/s | 74.5 KiB | 00m00s [139/170] libunistring-0:1.1-10.fc43.x8 100% | 11.0 MiB/s | 542.9 KiB | 00m00s [140/170] nettle-0:3.10.1-2.fc43.x86_64 100% | 8.3 MiB/s | 424.2 KiB | 00m00s [141/170] p11-kit-0:0.25.8-1.fc43.x86_6 100% | 8.2 MiB/s | 503.8 KiB | 00m00s [142/170] cyrus-sasl-lib-0:2.1.28-33.fc 100% | 13.0 MiB/s | 787.9 KiB | 00m00s [143/170] libevent-0:2.1.12-16.fc43.x86 100% | 3.7 MiB/s | 257.8 KiB | 00m00s [144/170] libtool-ltdl-0:2.5.4-7.fc43.x 100% | 635.7 KiB/s | 36.2 KiB | 00m00s [145/170] libffi-0:3.5.1-2.fc43.x86_64 100% | 951.7 KiB/s | 40.9 KiB | 00m00s [146/170] gdbm-libs-1:1.23-10.fc43.x86_ 100% | 1.3 MiB/s | 56.8 KiB | 00m00s [147/170] alternatives-0:1.33-2.fc43.x8 100% | 753.3 KiB/s | 40.7 KiB | 00m00s [148/170] jansson-0:2.14-3.fc43.x86_64 100% | 532.7 KiB/s | 45.3 KiB | 00m00s [149/170] pkgconf-pkg-config-0:2.3.0-3. 100% | 106.8 KiB/s | 9.6 KiB | 00m00s [150/170] pkgconf-0:2.3.0-3.fc43.x86_64 100% | 873.9 KiB/s | 44.6 KiB | 00m00s [151/170] binutils-0:2.45-1.fc43.x86_64 100% | 31.9 MiB/s | 5.9 MiB | 00m00s [152/170] pkgconf-m4-0:2.3.0-3.fc43.noa 100% | 296.0 KiB/s | 13.9 KiB | 00m00s [153/170] libpkgconf-0:2.3.0-3.fc43.x86 100% | 881.3 KiB/s | 37.9 KiB | 00m00s [154/170] p11-kit-trust-0:0.25.8-1.fc43 100% | 2.8 MiB/s | 139.6 KiB | 00m00s [155/170] fedora-release-0:43-0.22.noar 100% | 279.5 KiB/s | 14.0 KiB | 00m00s [156/170] systemd-standalone-sysusers-0 100% | 2.7 MiB/s | 143.8 KiB | 00m00s [157/170] gdb-minimal-0:16.3-6.fc43.x86 100% | 55.1 MiB/s | 4.4 MiB | 00m00s [158/170] xxhash-libs-0:0.8.3-3.fc43.x8 100% | 566.0 KiB/s | 38.5 KiB | 00m00s [159/170] fedora-release-identity-basic 100% | 226.9 KiB/s | 14.7 KiB | 00m00s [160/170] libcurl-0:8.15.0-2.fc43.x86_6 100% | 6.8 MiB/s | 404.3 KiB | 00m00s [161/170] krb5-libs-0:1.21.3-7.fc43.x86 100% | 8.9 MiB/s | 758.9 KiB | 00m00s [162/170] libnghttp2-0:1.66.0-2.fc43.x8 100% | 1.4 MiB/s | 72.5 KiB | 00m00s [163/170] libbrotli-0:1.1.0-10.fc43.x86 100% | 3.2 MiB/s | 339.1 KiB | 00m00s [164/170] libpsl-0:0.21.5-6.fc43.x86_64 100% | 1.4 MiB/s | 65.0 KiB | 00m00s [165/170] libssh-0:0.11.3-1.fc43.x86_64 100% | 5.1 MiB/s | 232.8 KiB | 00m00s [166/170] keyutils-libs-0:1.6.3-6.fc43. 100% | 602.9 KiB/s | 31.4 KiB | 00m00s [167/170] libcom_err-0:1.47.3-2.fc43.x8 100% | 446.5 KiB/s | 26.8 KiB | 00m00s [168/170] libverto-0:0.3.2-11.fc43.x86_ 100% | 439.9 KiB/s | 20.7 KiB | 00m00s [169/170] publicsuffix-list-dafsa-0:202 100% | 1.7 MiB/s | 59.2 KiB | 00m00s [170/170] libssh-config-0:0.11.3-1.fc43 100% | 202.5 KiB/s | 9.1 KiB | 00m00s -------------------------------------------------------------------------------- [170/170] Total 100% | 15.6 MiB/s | 59.1 MiB | 00m04s Running transaction Importing OpenPGP key 0x31645531: UserID : "Fedora (43) " Fingerprint: C6E7F081CF80E13146676E88829B606631645531 From : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-43-primary The key was successfully imported. 
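(Aside: the OpenPGP import above is the point where the buildroot starts trusting Fedora 43 package signatures. One way to double-check the key out of band, assuming the distribution-gpg-keys package is installed on the host, is a sketch like:

  # print the fingerprint of the key file the transaction imported and compare it
  # with the fingerprint shown in the log: C6E7F081CF80E13146676E88829B606631645531
  gpg --show-keys --with-fingerprint \
      /usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-43-primary

The log's own import via dnf5 already performs this verification; the command is just a manual cross-check.)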
[ 1/172] Verify package files 100% | 735.0 B/s | 170.0 B | 00m00s [ 2/172] Prepare transaction 100% | 3.9 KiB/s | 170.0 B | 00m00s [ 3/172] Installing libgcc-0:15.2.1-2. 100% | 262.0 MiB/s | 268.3 KiB | 00m00s [ 4/172] Installing libssh-config-0:0. 100% | 0.0 B/s | 816.0 B | 00m00s [ 5/172] Installing publicsuffix-list- 100% | 0.0 B/s | 69.8 KiB | 00m00s [ 6/172] Installing fedora-release-ide 100% | 0.0 B/s | 916.0 B | 00m00s [ 7/172] Installing fedora-gpg-keys-0: 100% | 43.7 MiB/s | 179.0 KiB | 00m00s [ 8/172] Installing fedora-repos-0:43- 100% | 0.0 B/s | 5.7 KiB | 00m00s [ 9/172] Installing fedora-release-com 100% | 24.2 MiB/s | 24.7 KiB | 00m00s [ 10/172] Installing fedora-release-0:4 100% | 17.3 KiB/s | 124.0 B | 00m00s >>> Running sysusers scriptlet: setup-0:2.15.0-26.fc43.noarch >>> Finished sysusers scriptlet: setup-0:2.15.0-26.fc43.noarch >>> Scriptlet output: >>> Creating group 'adm' with GID 4. >>> Creating group 'audio' with GID 63. >>> Creating group 'cdrom' with GID 11. >>> Creating group 'clock' with GID 103. >>> Creating group 'dialout' with GID 18. >>> Creating group 'disk' with GID 6. >>> Creating group 'floppy' with GID 19. >>> Creating group 'ftp' with GID 50. >>> Creating group 'games' with GID 20. >>> Creating group 'input' with GID 104. >>> Creating group 'kmem' with GID 9. >>> Creating group 'kvm' with GID 36. >>> Creating group 'lock' with GID 54. >>> Creating group 'lp' with GID 7. >>> Creating group 'mail' with GID 12. >>> Creating group 'man' with GID 15. >>> Creating group 'mem' with GID 8. >>> Creating group 'nobody' with GID 65534. >>> Creating group 'render' with GID 105. >>> Creating group 'root' with GID 0. >>> Creating group 'sgx' with GID 106. >>> Creating group 'sys' with GID 3. >>> Creating group 'tape' with GID 33. >>> Creating group 'tty' with GID 5. >>> Creating group 'users' with GID 100. >>> Creating group 'utmp' with GID 22. >>> Creating group 'video' with GID 39. >>> Creating group 'wheel' with GID 10. >>> Creating user 'adm' (adm) with UID 3 and GID 4. >>> Creating group 'bin' with GID 1. >>> Creating user 'bin' (bin) with UID 1 and GID 1. >>> Creating group 'daemon' with GID 2. >>> Creating user 'daemon' (daemon) with UID 2 and GID 2. >>> Creating user 'ftp' (FTP User) with UID 14 and GID 50. >>> Creating user 'games' (games) with UID 12 and GID 100. >>> Creating user 'halt' (halt) with UID 7 and GID 0. >>> Creating user 'lp' (lp) with UID 4 and GID 7. >>> Creating user 'mail' (mail) with UID 8 and GID 12. >>> Creating user 'nobody' (Kernel Overflow User) with UID 65534 and GID 65534. >>> Creating user 'operator' (operator) with UID 11 and GID 0. >>> Creating user 'root' (Super User) with UID 0 and GID 0. >>> Creating user 'shutdown' (shutdown) with UID 6 and GID 0. >>> Creating user 'sync' (sync) with UID 5 and GID 0. >>> [ 11/172] Installing setup-0:2.15.0-26. 100% | 51.0 MiB/s | 730.6 KiB | 00m00s >>> [RPM] /etc/hosts created as /etc/hosts.rpmnew [ 12/172] Installing filesystem-0:3.18- 100% | 2.8 MiB/s | 212.8 KiB | 00m00s [ 13/172] Installing pkgconf-m4-0:2.3.0 100% | 0.0 B/s | 14.8 KiB | 00m00s [ 14/172] Installing pcre2-syntax-0:10. 
100% | 271.2 MiB/s | 277.8 KiB | 00m00s [ 15/172] Installing gnulib-l10n-0:2024 100% | 215.5 MiB/s | 661.9 KiB | 00m00s [ 16/172] Installing coreutils-common-0 100% | 389.4 MiB/s | 11.3 MiB | 00m00s [ 17/172] Installing ncurses-base-0:6.5 100% | 86.3 MiB/s | 353.5 KiB | 00m00s [ 18/172] Installing bash-0:5.3.0-2.fc4 100% | 271.9 MiB/s | 8.4 MiB | 00m00s [ 19/172] Installing glibc-common-0:2.4 100% | 63.8 MiB/s | 1.0 MiB | 00m00s [ 20/172] Installing glibc-gconv-extra- 100% | 281.1 MiB/s | 7.3 MiB | 00m00s [ 21/172] Installing glibc-0:2.42-4.fc4 100% | 176.4 MiB/s | 6.7 MiB | 00m00s [ 22/172] Installing ncurses-libs-0:6.5 100% | 232.6 MiB/s | 952.8 KiB | 00m00s [ 23/172] Installing glibc-minimal-lang 100% | 0.0 B/s | 124.0 B | 00m00s [ 24/172] Installing zlib-ng-compat-0:2 100% | 135.2 MiB/s | 138.4 KiB | 00m00s [ 25/172] Installing bzip2-libs-0:1.0.8 100% | 79.8 MiB/s | 81.7 KiB | 00m00s [ 26/172] Installing libgpg-error-0:1.5 100% | 60.0 MiB/s | 921.1 KiB | 00m00s [ 27/172] Installing libstdc++-0:15.2.1 100% | 355.5 MiB/s | 2.8 MiB | 00m00s [ 28/172] Installing xz-libs-1:5.8.1-2. 100% | 213.8 MiB/s | 218.9 KiB | 00m00s [ 29/172] Installing libassuan-0:2.5.7- 100% | 161.7 MiB/s | 165.6 KiB | 00m00s [ 30/172] Installing libgcrypt-0:1.11.1 100% | 393.8 MiB/s | 1.6 MiB | 00m00s [ 31/172] Installing readline-0:8.3-2.f 100% | 250.9 MiB/s | 513.9 KiB | 00m00s [ 32/172] Installing gmp-1:6.3.0-4.fc43 100% | 397.2 MiB/s | 813.5 KiB | 00m00s [ 33/172] Installing libuuid-0:2.41.1-1 100% | 0.0 B/s | 38.5 KiB | 00m00s [ 34/172] Installing popt-0:1.19-9.fc43 100% | 68.1 MiB/s | 139.4 KiB | 00m00s [ 35/172] Installing npth-0:1.8-3.fc43. 100% | 0.0 B/s | 50.7 KiB | 00m00s [ 36/172] Installing libblkid-0:2.41.1- 100% | 257.2 MiB/s | 263.4 KiB | 00m00s [ 37/172] Installing libxcrypt-0:4.4.38 100% | 280.4 MiB/s | 287.2 KiB | 00m00s [ 38/172] Installing libzstd-0:1.5.7-2. 100% | 391.2 MiB/s | 801.1 KiB | 00m00s [ 39/172] Installing elfutils-libelf-0: 100% | 388.8 MiB/s | 1.2 MiB | 00m00s [ 40/172] Installing sqlite-libs-0:3.50 100% | 379.1 MiB/s | 1.5 MiB | 00m00s [ 41/172] Installing gnupg2-gpgconf-0:2 100% | 18.9 MiB/s | 252.0 KiB | 00m00s [ 42/172] Installing libattr-0:2.5.2-6. 100% | 0.0 B/s | 25.4 KiB | 00m00s [ 43/172] Installing libacl-0:2.3.2-4.f 100% | 0.0 B/s | 36.8 KiB | 00m00s [ 44/172] Installing libtasn1-0:4.20.0- 100% | 173.9 MiB/s | 178.1 KiB | 00m00s [ 45/172] Installing libunistring-0:1.1 100% | 345.3 MiB/s | 1.7 MiB | 00m00s [ 46/172] Installing libidn2-0:2.3.8-2. 100% | 60.6 MiB/s | 558.7 KiB | 00m00s [ 47/172] Installing crypto-policies-0: 100% | 33.6 MiB/s | 172.0 KiB | 00m00s [ 48/172] Installing dwz-0:0.16-2.fc43. 100% | 20.1 MiB/s | 288.5 KiB | 00m00s [ 49/172] Installing gnupg2-verify-0:2. 100% | 26.3 MiB/s | 349.9 KiB | 00m00s [ 50/172] Installing mpfr-0:4.2.2-2.fc4 100% | 271.6 MiB/s | 834.4 KiB | 00m00s [ 51/172] Installing gawk-0:5.3.2-2.fc4 100% | 100.9 MiB/s | 1.8 MiB | 00m00s [ 52/172] Installing libksba-0:1.6.7-4. 100% | 195.8 MiB/s | 401.1 KiB | 00m00s [ 53/172] Installing unzip-0:6.0-67.fc4 100% | 27.2 MiB/s | 389.8 KiB | 00m00s [ 54/172] Installing file-libs-0:5.46-8 100% | 624.1 MiB/s | 11.9 MiB | 00m00s [ 55/172] Installing file-0:5.46-8.fc43 100% | 7.6 MiB/s | 101.7 KiB | 00m00s [ 56/172] Installing pcre2-0:10.46-1.fc 100% | 341.4 MiB/s | 699.1 KiB | 00m00s [ 57/172] Installing grep-0:3.12-2.fc43 100% | 62.7 MiB/s | 1.0 MiB | 00m00s [ 58/172] Installing xz-1:5.8.1-2.fc43. 
100% | 74.0 MiB/s | 1.3 MiB | 00m00s [ 59/172] Installing libeconf-0:0.7.9-2 100% | 65.0 MiB/s | 66.5 KiB | 00m00s [ 60/172] Installing libcap-ng-0:0.8.5- 100% | 69.2 MiB/s | 70.8 KiB | 00m00s [ 61/172] Installing audit-libs-0:4.1.1 100% | 186.3 MiB/s | 381.5 KiB | 00m00s [ 62/172] Installing pam-libs-0:1.7.1-3 100% | 126.0 MiB/s | 129.0 KiB | 00m00s [ 63/172] Installing libcap-0:2.76-3.fc 100% | 16.1 MiB/s | 214.3 KiB | 00m00s [ 64/172] Installing systemd-libs-0:258 100% | 332.1 MiB/s | 2.3 MiB | 00m00s [ 65/172] Installing libsmartcols-0:2.4 100% | 177.3 MiB/s | 181.6 KiB | 00m00s [ 66/172] Installing libsepol-0:3.9-2.f 100% | 267.9 MiB/s | 822.9 KiB | 00m00s [ 67/172] Installing libselinux-0:3.9-5 100% | 189.8 MiB/s | 194.4 KiB | 00m00s [ 68/172] Installing findutils-1:4.10.0 100% | 103.2 MiB/s | 1.9 MiB | 00m00s [ 69/172] Installing sed-0:4.9-5.fc43.x 100% | 52.8 MiB/s | 865.5 KiB | 00m00s [ 70/172] Installing libmount-0:2.41.1- 100% | 182.5 MiB/s | 373.7 KiB | 00m00s [ 71/172] Installing lz4-libs-0:1.10.0- 100% | 158.6 MiB/s | 162.5 KiB | 00m00s [ 72/172] Installing lua-libs-0:5.4.8-2 100% | 275.3 MiB/s | 281.9 KiB | 00m00s [ 73/172] Installing json-c-0:0.18-7.fc 100% | 82.0 MiB/s | 84.0 KiB | 00m00s [ 74/172] Installing libffi-0:3.5.1-2.f 100% | 83.0 MiB/s | 85.0 KiB | 00m00s [ 75/172] Installing p11-kit-0:0.25.8-1 100% | 109.1 MiB/s | 2.3 MiB | 00m00s [ 76/172] Installing alternatives-0:1.3 100% | 4.8 MiB/s | 63.8 KiB | 00m00s [ 77/172] Installing p11-kit-trust-0:0. 100% | 20.8 MiB/s | 448.2 KiB | 00m00s [ 78/172] Installing openssl-libs-1:3.5 100% | 356.1 MiB/s | 8.9 MiB | 00m00s [ 79/172] Installing coreutils-0:9.7-6. 100% | 165.2 MiB/s | 5.5 MiB | 00m00s [ 80/172] Installing ca-certificates-0: 100% | 1.9 MiB/s | 2.5 MiB | 00m01s [ 81/172] Installing gzip-0:1.13-4.fc43 100% | 25.7 MiB/s | 394.4 KiB | 00m00s [ 82/172] Installing rpm-sequoia-0:1.9. 100% | 354.1 MiB/s | 2.5 MiB | 00m00s [ 83/172] Installing libfsverity-0:1.6- 100% | 28.8 MiB/s | 29.5 KiB | 00m00s [ 84/172] Installing libevent-0:2.1.12- 100% | 288.7 MiB/s | 886.8 KiB | 00m00s [ 85/172] Installing zstd-0:1.5.7-2.fc4 100% | 100.6 MiB/s | 1.7 MiB | 00m00s [ 86/172] Installing util-linux-core-0: 100% | 82.2 MiB/s | 1.5 MiB | 00m00s [ 87/172] Installing tar-2:1.35-6.fc43. 100% | 140.9 MiB/s | 3.0 MiB | 00m00s [ 88/172] Installing libsemanage-0:3.9- 100% | 151.5 MiB/s | 310.2 KiB | 00m00s [ 89/172] Installing systemd-standalone 100% | 22.1 MiB/s | 294.1 KiB | 00m00s [ 90/172] Installing rpm-libs-0:6.0.0-1 100% | 304.4 MiB/s | 935.2 KiB | 00m00s [ 91/172] Installing libusb1-0:1.0.29-4 100% | 21.1 MiB/s | 172.9 KiB | 00m00s >>> Running sysusers scriptlet: tpm2-tss-0:4.1.3-8.fc43.x86_64 >>> Finished sysusers scriptlet: tpm2-tss-0:4.1.3-8.fc43.x86_64 >>> Scriptlet output: >>> Creating group 'tss' with GID 59. >>> Creating user 'tss' (Account used for TPM access) with UID 59 and GID 59. >>> [ 92/172] Installing tpm2-tss-0:4.1.3-8 100% | 262.0 MiB/s | 1.6 MiB | 00m00s [ 93/172] Installing ima-evm-utils-libs 100% | 60.5 MiB/s | 62.0 KiB | 00m00s [ 94/172] Installing gnupg2-gpg-agent-0 100% | 30.0 MiB/s | 675.4 KiB | 00m00s [ 95/172] Installing zip-0:3.0-44.fc43. 100% | 45.5 MiB/s | 698.4 KiB | 00m00s [ 96/172] Installing gnupg2-keyboxd-0:2 100% | 28.3 MiB/s | 202.7 KiB | 00m00s [ 97/172] Installing libpsl-0:0.21.5-6. 
100% | 75.7 MiB/s | 77.5 KiB | 00m00s [ 98/172] Installing liblastlog2-0:2.41 100% | 7.0 MiB/s | 35.9 KiB | 00m00s [ 99/172] Installing libfdisk-0:2.41.1- 100% | 186.2 MiB/s | 381.4 KiB | 00m00s [100/172] Installing nettle-0:3.10.1-2. 100% | 258.4 MiB/s | 793.7 KiB | 00m00s [101/172] Installing gnutls-0:3.8.10-3. 100% | 349.0 MiB/s | 3.8 MiB | 00m00s [102/172] Installing libxml2-0:2.12.10- 100% | 94.7 MiB/s | 1.7 MiB | 00m00s [103/172] Installing libarchive-0:3.8.1 100% | 310.2 MiB/s | 953.1 KiB | 00m00s [104/172] Installing bzip2-0:1.0.8-21.f 100% | 7.5 MiB/s | 99.8 KiB | 00m00s [105/172] Installing add-determinism-0: 100% | 135.8 MiB/s | 2.4 MiB | 00m00s [106/172] Installing build-reproducibil 100% | 0.0 B/s | 1.0 KiB | 00m00s [107/172] Installing cpio-0:2.15-6.fc43 100% | 68.7 MiB/s | 1.1 MiB | 00m00s [108/172] Installing diffutils-0:3.12-3 100% | 91.8 MiB/s | 1.6 MiB | 00m00s [109/172] Installing ed-0:1.22.2-1.fc43 100% | 11.3 MiB/s | 150.4 KiB | 00m00s [110/172] Installing patch-0:2.8-2.fc43 100% | 16.9 MiB/s | 224.3 KiB | 00m00s [111/172] Installing libgomp-0:15.2.1-2 100% | 264.9 MiB/s | 542.5 KiB | 00m00s [112/172] Installing libtool-ltdl-0:2.5 100% | 69.6 MiB/s | 71.2 KiB | 00m00s [113/172] Installing gdbm-libs-1:1.23-1 100% | 128.5 MiB/s | 131.6 KiB | 00m00s [114/172] Installing cyrus-sasl-lib-0:2 100% | 127.6 MiB/s | 2.3 MiB | 00m00s [115/172] Installing openldap-0:2.6.10- 100% | 216.0 MiB/s | 663.7 KiB | 00m00s [116/172] Installing gnupg2-dirmngr-0:2 100% | 30.3 MiB/s | 621.1 KiB | 00m00s [117/172] Installing gnupg2-0:2.4.8-4.f 100% | 218.4 MiB/s | 6.6 MiB | 00m00s [118/172] Installing rpm-sign-libs-0:6. 100% | 39.6 MiB/s | 40.6 KiB | 00m00s [119/172] Installing gpgverify-0:2.2-3. 100% | 0.0 B/s | 9.4 KiB | 00m00s [120/172] Installing jansson-0:2.14-3.f 100% | 88.3 MiB/s | 90.5 KiB | 00m00s [121/172] Installing libpkgconf-0:2.3.0 100% | 77.4 MiB/s | 79.2 KiB | 00m00s [122/172] Installing pkgconf-0:2.3.0-3. 100% | 6.8 MiB/s | 91.0 KiB | 00m00s [123/172] Installing pkgconf-pkg-config 100% | 147.8 KiB/s | 1.8 KiB | 00m00s [124/172] Installing xxhash-libs-0:0.8. 100% | 89.4 MiB/s | 91.6 KiB | 00m00s [125/172] Installing libbrotli-0:1.1.0- 100% | 272.0 MiB/s | 835.6 KiB | 00m00s [126/172] Installing libnghttp2-0:1.66. 100% | 159.5 MiB/s | 163.3 KiB | 00m00s [127/172] Installing keyutils-libs-0:1. 100% | 54.4 MiB/s | 55.7 KiB | 00m00s [128/172] Installing libcom_err-0:1.47. 100% | 0.0 B/s | 64.2 KiB | 00m00s [129/172] Installing libverto-0:0.3.2-1 100% | 26.6 MiB/s | 27.2 KiB | 00m00s [130/172] Installing krb5-libs-0:1.21.3 100% | 327.4 MiB/s | 2.3 MiB | 00m00s [131/172] Installing libssh-0:0.11.3-1. 100% | 277.9 MiB/s | 569.2 KiB | 00m00s [132/172] Installing libcurl-0:8.15.0-2 100% | 294.4 MiB/s | 904.3 KiB | 00m00s [133/172] Installing curl-0:8.15.0-2.fc 100% | 20.2 MiB/s | 476.3 KiB | 00m00s [134/172] Installing rpm-0:6.0.0-1.fc43 100% | 75.7 MiB/s | 2.6 MiB | 00m00s [135/172] Installing efi-srpm-macros-0: 100% | 40.2 MiB/s | 41.1 KiB | 00m00s [136/172] Installing java-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s [137/172] Installing lua-srpm-macros-0: 100% | 0.0 B/s | 1.9 KiB | 00m00s [138/172] Installing tree-sitter-srpm-m 100% | 0.0 B/s | 9.3 KiB | 00m00s [139/172] Installing zig-srpm-macros-0: 100% | 0.0 B/s | 1.7 KiB | 00m00s [140/172] Installing filesystem-srpm-ma 100% | 0.0 B/s | 38.9 KiB | 00m00s [141/172] Installing elfutils-default-y 100% | 408.6 KiB/s | 2.0 KiB | 00m00s [142/172] Installing elfutils-libs-0:0. 
100% | 223.1 MiB/s | 685.2 KiB | 00m00s [143/172] Installing elfutils-debuginfo 100% | 5.6 MiB/s | 86.2 KiB | 00m00s [144/172] Installing elfutils-0:0.193-3 100% | 145.9 MiB/s | 2.9 MiB | 00m00s [145/172] Installing binutils-0:2.45-1. 100% | 316.0 MiB/s | 26.5 MiB | 00m00s [146/172] Installing gdb-minimal-0:16.3 100% | 270.5 MiB/s | 13.3 MiB | 00m00s [147/172] Installing debugedit-0:5.2-3. 100% | 15.2 MiB/s | 217.3 KiB | 00m00s [148/172] Installing rpm-build-libs-0:6 100% | 262.9 MiB/s | 269.2 KiB | 00m00s [149/172] Installing rust-srpm-macros-0 100% | 0.0 B/s | 5.6 KiB | 00m00s [150/172] Installing qt6-srpm-macros-0: 100% | 0.0 B/s | 740.0 B | 00m00s [151/172] Installing qt5-srpm-macros-0: 100% | 0.0 B/s | 776.0 B | 00m00s [152/172] Installing perl-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s [153/172] Installing package-notes-srpm 100% | 0.0 B/s | 2.0 KiB | 00m00s [154/172] Installing openblas-srpm-macr 100% | 0.0 B/s | 392.0 B | 00m00s [155/172] Installing ocaml-srpm-macros- 100% | 0.0 B/s | 2.1 KiB | 00m00s [156/172] Installing kernel-srpm-macros 100% | 0.0 B/s | 2.3 KiB | 00m00s [157/172] Installing gnat-srpm-macros-0 100% | 0.0 B/s | 1.3 KiB | 00m00s [158/172] Installing ghc-srpm-macros-0: 100% | 0.0 B/s | 1.0 KiB | 00m00s [159/172] Installing gap-srpm-macros-0: 100% | 0.0 B/s | 2.6 KiB | 00m00s [160/172] Installing fpc-srpm-macros-0: 100% | 0.0 B/s | 420.0 B | 00m00s [161/172] Installing ansible-srpm-macro 100% | 0.0 B/s | 36.2 KiB | 00m00s [162/172] Installing rpm-build-0:6.0.0- 100% | 19.3 MiB/s | 296.5 KiB | 00m00s [163/172] Installing pyproject-srpm-mac 100% | 2.4 MiB/s | 2.5 KiB | 00m00s [164/172] Installing redhat-rpm-config- 100% | 92.3 MiB/s | 189.1 KiB | 00m00s [165/172] Installing forge-srpm-macros- 100% | 0.0 B/s | 40.3 KiB | 00m00s [166/172] Installing fonts-srpm-macros- 100% | 55.7 MiB/s | 57.0 KiB | 00m00s [167/172] Installing go-srpm-macros-0:3 100% | 61.6 MiB/s | 63.0 KiB | 00m00s [168/172] Installing python-srpm-macros 100% | 25.8 MiB/s | 52.8 KiB | 00m00s [169/172] Installing util-linux-0:2.41. 100% | 96.5 MiB/s | 3.6 MiB | 00m00s [170/172] Installing shadow-utils-2:4.1 100% | 128.0 MiB/s | 4.0 MiB | 00m00s [171/172] Installing which-0:2.23-3.fc4 100% | 6.4 MiB/s | 85.7 KiB | 00m00s [172/172] Installing info-0:7.2-6.fc43. 100% | 210.4 KiB/s | 354.3 KiB | 00m02s Complete! 
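(Aside: with the minimal buildroot installed, mock next rebuilds the SRPM inside the chroot, which is what the "Start: buildsrpm" / "rpmbuild -bs" lines below show. A rough sketch of repeating just that stage outside Copr, assuming the mock package is installed and the current user is in the mock group, with placeholder paths for the spec and sources and the stock fedora-43-x86_64 config standing in for the generated child.cfg:

  mock -r fedora-43-x86_64 --buildsrpm \
      --spec ./ollama.spec --sources ./sources \
      --resultdir ./results

The Copr builder itself drives the same tool through the exact unbuffer mock command quoted earlier in this log.)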
Finish: installing minimal buildroot with dnf5 Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: INFO: add-determinism-0.6.0-2.fc43.x86_64 alternatives-1.33-2.fc43.x86_64 ansible-srpm-macros-1-18.1.fc43.noarch audit-libs-4.1.1-2.fc43.x86_64 bash-5.3.0-2.fc43.x86_64 binutils-2.45-1.fc43.x86_64 build-reproducibility-srpm-macros-0.6.0-2.fc43.noarch bzip2-1.0.8-21.fc43.x86_64 bzip2-libs-1.0.8-21.fc43.x86_64 ca-certificates-2025.2.80_v9.0.304-1.1.fc43.noarch coreutils-9.7-6.fc43.x86_64 coreutils-common-9.7-6.fc43.x86_64 cpio-2.15-6.fc43.x86_64 crypto-policies-20250714-5.gitcd6043a.fc43.noarch curl-8.15.0-2.fc43.x86_64 cyrus-sasl-lib-2.1.28-33.fc43.x86_64 debugedit-5.2-3.fc43.x86_64 diffutils-3.12-3.fc43.x86_64 dwz-0.16-2.fc43.x86_64 ed-1.22.2-1.fc43.x86_64 efi-srpm-macros-6-4.fc43.noarch elfutils-0.193-3.fc43.x86_64 elfutils-debuginfod-client-0.193-3.fc43.x86_64 elfutils-default-yama-scope-0.193-3.fc43.noarch elfutils-libelf-0.193-3.fc43.x86_64 elfutils-libs-0.193-3.fc43.x86_64 fedora-gpg-keys-43-0.4.noarch fedora-release-43-0.22.noarch fedora-release-common-43-0.22.noarch fedora-release-identity-basic-43-0.22.noarch fedora-repos-43-0.4.noarch file-5.46-8.fc43.x86_64 file-libs-5.46-8.fc43.x86_64 filesystem-3.18-50.fc43.x86_64 filesystem-srpm-macros-3.18-50.fc43.noarch findutils-4.10.0-6.fc43.x86_64 fonts-srpm-macros-2.0.5-23.fc43.noarch forge-srpm-macros-0.4.0-3.fc43.noarch fpc-srpm-macros-1.3-15.fc43.noarch gap-srpm-macros-1-1.fc43.noarch gawk-5.3.2-2.fc43.x86_64 gdb-minimal-16.3-6.fc43.x86_64 gdbm-libs-1.23-10.fc43.x86_64 ghc-srpm-macros-1.9.2-3.fc43.noarch glibc-2.42-4.fc43.x86_64 glibc-common-2.42-4.fc43.x86_64 glibc-gconv-extra-2.42-4.fc43.x86_64 glibc-minimal-langpack-2.42-4.fc43.x86_64 gmp-6.3.0-4.fc43.x86_64 gnat-srpm-macros-6-8.fc43.noarch gnulib-l10n-20241231-1.fc43.noarch gnupg2-2.4.8-4.fc43.x86_64 gnupg2-dirmngr-2.4.8-4.fc43.x86_64 gnupg2-gpg-agent-2.4.8-4.fc43.x86_64 gnupg2-gpgconf-2.4.8-4.fc43.x86_64 gnupg2-keyboxd-2.4.8-4.fc43.x86_64 gnupg2-verify-2.4.8-4.fc43.x86_64 gnutls-3.8.10-3.fc43.x86_64 go-srpm-macros-3.8.0-1.fc43.noarch gpg-pubkey-c6e7f081cf80e13146676e88829b606631645531-66b6dccf gpgverify-2.2-3.fc43.noarch grep-3.12-2.fc43.x86_64 gzip-1.13-4.fc43.x86_64 ima-evm-utils-libs-1.6.2-6.fc43.x86_64 info-7.2-6.fc43.x86_64 jansson-2.14-3.fc43.x86_64 java-srpm-macros-1-7.fc43.noarch json-c-0.18-7.fc43.x86_64 kernel-srpm-macros-1.0-27.fc43.noarch keyutils-libs-1.6.3-6.fc43.x86_64 krb5-libs-1.21.3-7.fc43.x86_64 libacl-2.3.2-4.fc43.x86_64 libarchive-3.8.1-3.fc43.x86_64 libassuan-2.5.7-4.fc43.x86_64 libattr-2.5.2-6.fc43.x86_64 libblkid-2.41.1-17.fc43.x86_64 libbrotli-1.1.0-10.fc43.x86_64 libcap-2.76-3.fc43.x86_64 libcap-ng-0.8.5-8.fc43.x86_64 libcom_err-1.47.3-2.fc43.x86_64 libcurl-8.15.0-2.fc43.x86_64 libeconf-0.7.9-2.fc43.x86_64 libevent-2.1.12-16.fc43.x86_64 libfdisk-2.41.1-17.fc43.x86_64 libffi-3.5.1-2.fc43.x86_64 libfsverity-1.6-3.fc43.x86_64 libgcc-15.2.1-2.fc43.x86_64 libgcrypt-1.11.1-2.fc43.x86_64 libgomp-15.2.1-2.fc43.x86_64 libgpg-error-1.55-2.fc43.x86_64 libidn2-2.3.8-2.fc43.x86_64 libksba-1.6.7-4.fc43.x86_64 liblastlog2-2.41.1-17.fc43.x86_64 libmount-2.41.1-17.fc43.x86_64 libnghttp2-1.66.0-2.fc43.x86_64 libpkgconf-2.3.0-3.fc43.x86_64 libpsl-0.21.5-6.fc43.x86_64 libselinux-3.9-5.fc43.x86_64 libsemanage-3.9-4.fc43.x86_64 libsepol-3.9-2.fc43.x86_64 libsmartcols-2.41.1-17.fc43.x86_64 libssh-0.11.3-1.fc43.x86_64 libssh-config-0.11.3-1.fc43.noarch libstdc++-15.2.1-2.fc43.x86_64 libtasn1-4.20.0-2.fc43.x86_64 
libtool-ltdl-2.5.4-7.fc43.x86_64 libunistring-1.1-10.fc43.x86_64 libusb1-1.0.29-4.fc43.x86_64 libuuid-2.41.1-17.fc43.x86_64 libverto-0.3.2-11.fc43.x86_64 libxcrypt-4.4.38-8.fc43.x86_64 libxml2-2.12.10-5.fc43.x86_64 libzstd-1.5.7-2.fc43.x86_64 lua-libs-5.4.8-2.fc43.x86_64 lua-srpm-macros-1-16.fc43.noarch lz4-libs-1.10.0-3.fc43.x86_64 mpfr-4.2.2-2.fc43.x86_64 ncurses-base-6.5-7.20250614.fc43.noarch ncurses-libs-6.5-7.20250614.fc43.x86_64 nettle-3.10.1-2.fc43.x86_64 npth-1.8-3.fc43.x86_64 ocaml-srpm-macros-11-2.fc43.noarch openblas-srpm-macros-2-20.fc43.noarch openldap-2.6.10-4.fc43.x86_64 openssl-libs-3.5.1-2.fc43.x86_64 p11-kit-0.25.8-1.fc43.x86_64 p11-kit-trust-0.25.8-1.fc43.x86_64 package-notes-srpm-macros-0.5-14.fc43.noarch pam-libs-1.7.1-3.fc43.x86_64 patch-2.8-2.fc43.x86_64 pcre2-10.46-1.fc43.x86_64 pcre2-syntax-10.46-1.fc43.noarch perl-srpm-macros-1-60.fc43.noarch pkgconf-2.3.0-3.fc43.x86_64 pkgconf-m4-2.3.0-3.fc43.noarch pkgconf-pkg-config-2.3.0-3.fc43.x86_64 popt-1.19-9.fc43.x86_64 publicsuffix-list-dafsa-20250616-2.fc43.noarch pyproject-srpm-macros-1.18.4-1.fc43.noarch python-srpm-macros-3.14-5.fc43.noarch qt5-srpm-macros-5.15.17-2.fc43.noarch qt6-srpm-macros-6.9.2-1.fc43.noarch readline-8.3-2.fc43.x86_64 redhat-rpm-config-343-11.fc43.noarch rpm-6.0.0-1.fc43.x86_64 rpm-build-6.0.0-1.fc43.x86_64 rpm-build-libs-6.0.0-1.fc43.x86_64 rpm-libs-6.0.0-1.fc43.x86_64 rpm-sequoia-1.9.0-2.fc43.x86_64 rpm-sign-libs-6.0.0-1.fc43.x86_64 rust-srpm-macros-26.4-1.fc43.noarch sed-4.9-5.fc43.x86_64 setup-2.15.0-26.fc43.noarch shadow-utils-4.18.0-3.fc43.x86_64 sqlite-libs-3.50.2-2.fc43.x86_64 systemd-libs-258-1.fc43.x86_64 systemd-standalone-sysusers-258-1.fc43.x86_64 tar-1.35-6.fc43.x86_64 tpm2-tss-4.1.3-8.fc43.x86_64 tree-sitter-srpm-macros-0.4.2-1.fc43.noarch unzip-6.0-67.fc43.x86_64 util-linux-2.41.1-17.fc43.x86_64 util-linux-core-2.41.1-17.fc43.x86_64 which-2.23-3.fc43.x86_64 xxhash-libs-0.8.3-3.fc43.x86_64 xz-5.8.1-2.fc43.x86_64 xz-libs-5.8.1-2.fc43.x86_64 zig-srpm-macros-1-5.fc43.noarch zip-3.0-44.fc43.x86_64 zlib-ng-compat-2.2.5-2.fc43.x86_64 zstd-1.5.7-2.fc43.x86_64 Start: buildsrpm Start: rpmbuild -bs Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc43.src.rpm Finish: rpmbuild -bs INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan INFO: /var/lib/mock/fedora-43-x86_64-1759552642.548045/root/var/log/dnf5.log INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz /bin/tar: Removing leading `/' from member names Finish: buildsrpm INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-n_3z7qpl/ollama/ollama.spec) Config(child) 0 minutes 25 seconds INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results INFO: Cleaning up build root ('cleanup_on_success=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot INFO: Start(/var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc43.src.rpm) Config(fedora-43-x86_64) Start(bootstrap): chroot init INFO: mounting tmpfs at /var/lib/mock/fedora-43-x86_64-bootstrap-1759552642.548045/root. INFO: reusing tmpfs at /var/lib/mock/fedora-43-x86_64-bootstrap-1759552642.548045/root. INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start(bootstrap): cleaning package manager metadata Finish(bootstrap): cleaning package manager metadata Finish(bootstrap): chroot init Start: chroot init INFO: mounting tmpfs at /var/lib/mock/fedora-43-x86_64-1759552642.548045/root. 
INFO: calling preinit hooks INFO: enabled root cache Start: unpacking root cache Finish: unpacking root cache INFO: enabled package manager cache Start: cleaning package manager metadata Finish: cleaning package manager metadata INFO: enabled HW Info plugin INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-6.0.0-1.fc43.x86_64 rpm-sequoia-1.9.0-2.fc43.x86_64 dnf5-5.2.17.0-2.fc43.x86_64 dnf5-plugins-5.2.17.0-2.fc43.x86_64 Finish: chroot init Start: build phase for ollama-0.12.3-1.fc43.src.rpm Start: build setup for ollama-0.12.3-1.fc43.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc43.src.rpm Updating and loading repositories: Additional repo https_developer_downlo 100% | 19.4 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 19.4 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 7.4 KiB/s | 1.5 KiB | 00m00s fedora 100% | 36.0 KiB/s | 28.2 KiB | 00m01s updates 100% | 60.9 KiB/s | 30.2 KiB | 00m00s Repositories loaded. Package Arch Version Repository Size Installing: cmake x86_64 3.31.6-4.fc43 fedora 34.5 MiB gcc-c++ x86_64 15.2.1-2.fc43 fedora 41.4 MiB go-rpm-macros x86_64 3.8.0-1.fc43 fedora 96.6 KiB go-vendor-tools noarch 0.8.0-5.fc43 fedora 339.3 KiB hipblas-devel x86_64 6.4.1-4.fc43 fedora 3.1 MiB rocblas-devel x86_64 6.4.2-4.fc43 fedora 2.8 MiB rocm-comgr-devel x86_64 19-14.rocm6.4.2.fc43 fedora 98.2 KiB rocm-hip-devel x86_64 6.4.2-2.fc43 fedora 2.8 MiB rocm-runtime-devel x86_64 6.4.2-2.fc43 fedora 571.4 KiB rocminfo x86_64 6.4.0-2.fc43 fedora 77.3 KiB systemd-rpm-macros noarch 258-1.fc43 fedora 8.5 KiB Installing dependencies: annobin-docs noarch 12.99-1.fc43 fedora 98.9 KiB annobin-plugin-gcc x86_64 12.99-1.fc43 fedora 1.0 MiB cmake-data noarch 3.31.6-4.fc43 fedora 8.5 MiB cmake-filesystem x86_64 3.31.6-4.fc43 fedora 0.0 B cmake-rpm-macros noarch 3.31.6-4.fc43 fedora 7.7 KiB cpp x86_64 15.2.1-2.fc43 fedora 37.9 MiB emacs-filesystem noarch 1:30.0-5.fc43 fedora 0.0 B expat x86_64 2.7.2-1.fc43 fedora 298.6 KiB gcc x86_64 15.2.1-2.fc43 fedora 111.9 MiB gcc-plugin-annobin x86_64 15.2.1-2.fc43 fedora 57.2 KiB git x86_64 2.51.0-2.fc43 fedora 56.4 KiB git-core x86_64 2.51.0-2.fc43 fedora 23.6 MiB git-core-doc noarch 2.51.0-2.fc43 fedora 17.7 MiB glibc-devel x86_64 2.42-4.fc43 fedora 2.3 MiB go-filesystem x86_64 3.8.0-1.fc43 fedora 0.0 B golang x86_64 1.25.1-2.fc43 fedora 9.6 MiB golang-bin x86_64 1.25.1-2.fc43 fedora 67.2 MiB golang-src noarch 1.25.1-2.fc43 fedora 81.4 MiB golist x86_64 0.10.4-8.fc43 fedora 4.5 MiB groff-base x86_64 1.23.0-10.fc43 fedora 3.8 MiB hipblas x86_64 6.4.1-4.fc43 fedora 1.1 MiB hipblas-common-devel noarch 6.4.0-2.fc43 fedora 16.4 KiB hipcc x86_64 19-14.rocm6.4.2.fc43 fedora 652.9 KiB hwdata noarch 0.399-1.fc43 fedora 9.6 MiB jsoncpp x86_64 1.9.6-2.fc43 fedora 257.6 KiB kernel-headers x86_64 6.17.0-63.fc43 fedora 6.7 MiB kmod x86_64 34.2-2.fc43 fedora 247.2 KiB less x86_64 679-2.fc43 fedora 406.1 KiB libcbor x86_64 0.12.0-6.fc43 fedora 77.8 KiB libdrm x86_64 2.4.125-2.fc43 fedora 395.8 KiB libedit x86_64 3.1-56.20250104cvs.fc43 fedora 240.1 KiB libfido2 x86_64 1.16.0-3.fc43 fedora 238.5 KiB libmpc x86_64 1.3.1-8.fc43 fedora 160.6 KiB libpciaccess x86_64 0.16-16.fc43 fedora 44.5 KiB libstdc++-devel x86_64 15.2.1-2.fc43 fedora 37.3 MiB libuv x86_64 1:1.51.0-2.fc43 fedora 570.2 KiB libxcrypt-devel x86_64 4.4.38-8.fc43 fedora 30.8 KiB make x86_64 1:4.4.1-11.fc43 fedora 1.8 MiB mpdecimal x86_64 4.0.1-2.fc43 
fedora 217.2 KiB ncurses x86_64 6.5-7.20250614.fc43 fedora 609.8 KiB numactl-libs x86_64 2.0.19-3.fc43 fedora 56.9 KiB openssh x86_64 10.0p1-5.fc43 fedora 1.4 MiB openssh-clients x86_64 10.0p1-5.fc43 fedora 2.6 MiB perl-AutoLoader noarch 5.74-520.fc43 fedora 20.6 KiB perl-B x86_64 1.89-520.fc43 fedora 501.3 KiB perl-Carp noarch 1.54-520.fc43 fedora 46.6 KiB perl-Class-Struct noarch 0.68-520.fc43 fedora 25.4 KiB perl-Data-Dumper x86_64 2.191-521.fc43 fedora 115.6 KiB perl-Digest noarch 1.20-520.fc43 fedora 35.3 KiB perl-Digest-MD5 x86_64 2.59-520.fc43 fedora 59.7 KiB perl-DynaLoader x86_64 1.57-520.fc43 fedora 32.1 KiB perl-Encode x86_64 4:3.21-520.fc43 fedora 4.7 MiB perl-Errno x86_64 1.38-520.fc43 fedora 8.4 KiB perl-Error noarch 1:0.17030-2.fc43 fedora 76.7 KiB perl-Exporter noarch 5.79-520.fc43 fedora 54.3 KiB perl-Fcntl x86_64 1.20-520.fc43 fedora 48.8 KiB perl-File-Basename noarch 2.86-520.fc43 fedora 14.0 KiB perl-File-Copy noarch 2.41-520.fc43 fedora 19.7 KiB perl-File-Path noarch 2.18-520.fc43 fedora 63.5 KiB perl-File-Temp noarch 1:0.231.100-520.fc43 fedora 162.3 KiB perl-File-Which noarch 1.27-14.fc43 fedora 30.4 KiB perl-File-stat noarch 1.14-520.fc43 fedora 12.5 KiB perl-FileHandle noarch 2.05-520.fc43 fedora 9.4 KiB perl-Getopt-Long noarch 1:2.58-520.fc43 fedora 144.5 KiB perl-Getopt-Std noarch 1.14-520.fc43 fedora 11.2 KiB perl-Git noarch 2.51.0-2.fc43 fedora 64.4 KiB perl-HTTP-Tiny noarch 0.090-521.fc43 fedora 154.4 KiB perl-IO x86_64 1.55-520.fc43 fedora 147.4 KiB perl-IO-Socket-IP noarch 0.43-521.fc43 fedora 100.3 KiB perl-IO-Socket-SSL noarch 2.095-2.fc43 fedora 714.5 KiB perl-IPC-Open3 noarch 1.24-520.fc43 fedora 27.7 KiB perl-MIME-Base32 noarch 1.303-24.fc43 fedora 30.7 KiB perl-MIME-Base64 x86_64 3.16-520.fc43 fedora 42.0 KiB perl-Net-SSLeay x86_64 1.94-11.fc43 fedora 1.3 MiB perl-POSIX x86_64 2.23-520.fc43 fedora 231.4 KiB perl-PathTools x86_64 3.94-520.fc43 fedora 180.0 KiB perl-Pod-Escapes noarch 1:1.07-520.fc43 fedora 24.9 KiB perl-Pod-Perldoc noarch 3.28.01-521.fc43 fedora 163.7 KiB perl-Pod-Simple noarch 1:3.47-3.fc43 fedora 565.3 KiB perl-Pod-Usage noarch 4:2.05-520.fc43 fedora 86.3 KiB perl-Scalar-List-Utils x86_64 5:1.70-1.fc43 fedora 144.9 KiB perl-SelectSaver noarch 1.02-520.fc43 fedora 2.2 KiB perl-Socket x86_64 4:2.040-2.fc43 fedora 120.3 KiB perl-Storable x86_64 1:3.37-521.fc43 fedora 231.2 KiB perl-Symbol noarch 1.09-520.fc43 fedora 6.8 KiB perl-Term-ANSIColor noarch 5.01-521.fc43 fedora 97.5 KiB perl-Term-Cap noarch 1.18-520.fc43 fedora 29.3 KiB perl-TermReadKey x86_64 2.38-26.fc43 fedora 64.0 KiB perl-Text-ParseWords noarch 3.31-520.fc43 fedora 13.6 KiB perl-Text-Tabs+Wrap noarch 2024.001-520.fc43 fedora 22.6 KiB perl-Time-Local noarch 2:1.350-520.fc43 fedora 69.0 KiB perl-URI noarch 5.34-1.fc43 fedora 268.0 KiB perl-base noarch 2.27-520.fc43 fedora 12.6 KiB perl-constant noarch 1.33-521.fc43 fedora 26.2 KiB perl-if noarch 0.61.000-520.fc43 fedora 5.8 KiB perl-interpreter x86_64 4:5.42.0-520.fc43 fedora 118.6 KiB perl-lib x86_64 0.65-520.fc43 fedora 8.5 KiB perl-libnet noarch 3.15-521.fc43 fedora 289.4 KiB perl-libs x86_64 4:5.42.0-520.fc43 fedora 11.5 MiB perl-locale noarch 1.13-520.fc43 fedora 6.1 KiB perl-mro x86_64 1.29-520.fc43 fedora 41.6 KiB perl-overload noarch 1.40-520.fc43 fedora 71.6 KiB perl-overloading noarch 0.02-520.fc43 fedora 4.9 KiB perl-parent noarch 1:0.244-520.fc43 fedora 10.3 KiB perl-podlators noarch 1:6.0.2-520.fc43 fedora 317.5 KiB perl-vars noarch 1.05-520.fc43 fedora 3.9 KiB python-pip-wheel noarch 25.1.1-18.fc43 fedora 1.2 
MiB python3 x86_64 3.14.0~rc3-1.fc43 fedora 28.9 KiB python3-boolean.py noarch 5.0-9.fc43 fedora 635.8 KiB python3-libs x86_64 3.14.0~rc3-1.fc43 fedora 43.0 MiB python3-license-expression noarch 30.4.4-3.fc43 fedora 1.2 MiB python3-zstarfile noarch 0.3.0-4.fc43 fedora 25.8 KiB rhash x86_64 1.4.5-3.fc43 fedora 351.1 KiB rocblas x86_64 6.4.2-4.fc43 fedora 3.9 GiB rocm-clang x86_64 19-14.rocm6.4.2.fc43 fedora 70.2 MiB rocm-clang-devel x86_64 19-14.rocm6.4.2.fc43 fedora 23.3 MiB rocm-clang-libs x86_64 19-14.rocm6.4.2.fc43 fedora 98.4 MiB rocm-clang-runtime-devel x86_64 19-14.rocm6.4.2.fc43 fedora 7.8 MiB rocm-comgr x86_64 19-14.rocm6.4.2.fc43 fedora 123.9 MiB rocm-device-libs x86_64 19-14.rocm6.4.2.fc43 fedora 3.2 MiB rocm-hip x86_64 6.4.2-2.fc43 fedora 24.9 MiB rocm-libc++ x86_64 19-14.rocm6.4.2.fc43 fedora 1.2 MiB rocm-libc++-devel x86_64 19-14.rocm6.4.2.fc43 fedora 7.5 MiB rocm-lld x86_64 19-14.rocm6.4.2.fc43 fedora 5.7 MiB rocm-llvm x86_64 19-14.rocm6.4.2.fc43 fedora 48.5 MiB rocm-llvm-devel x86_64 19-14.rocm6.4.2.fc43 fedora 25.3 MiB rocm-llvm-filesystem x86_64 19-14.rocm6.4.2.fc43 fedora 0.0 B rocm-llvm-libs x86_64 19-14.rocm6.4.2.fc43 fedora 84.8 MiB rocm-llvm-static x86_64 19-14.rocm6.4.2.fc43 fedora 1.8 GiB rocm-runtime x86_64 6.4.2-2.fc43 fedora 3.1 MiB rocsolver x86_64 6.4.2-3.fc43 fedora 987.8 MiB tzdata noarch 2025b-3.fc43 fedora 1.6 MiB vim-filesystem noarch 2:9.1.1775-1.fc43 fedora 40.0 B zlib-ng-compat-devel x86_64 2.2.5-2.fc43 fedora 107.0 KiB Transaction Summary: Installing: 145 packages Total size of inbound packages is 2 GiB. Need to download 2 GiB. After this operation, 8 GiB extra will be used (install 8 GiB, remove 0 B). [ 1/145] go-vendor-tools-0:0.8.0-5.fc4 100% | 1.8 MiB/s | 129.4 KiB | 00m00s [ 2/145] go-rpm-macros-0:3.8.0-1.fc43. 100% | 518.5 KiB/s | 38.4 KiB | 00m00s [ 3/145] hipblas-devel-0:6.4.1-4.fc43. 100% | 1.1 MiB/s | 105.6 KiB | 00m00s [ 4/145] rocm-comgr-devel-0:19-14.rocm 100% | 1.2 MiB/s | 31.8 KiB | 00m00s [ 5/145] rocblas-devel-0:6.4.2-4.fc43. 100% | 2.9 MiB/s | 111.5 KiB | 00m00s [ 6/145] rocm-runtime-devel-0:6.4.2-2. 100% | 4.0 MiB/s | 93.4 KiB | 00m00s [ 7/145] rocminfo-0:6.4.0-2.fc43.x86_6 100% | 1.2 MiB/s | 37.8 KiB | 00m00s [ 8/145] systemd-rpm-macros-0:258-1.fc 100% | 657.2 KiB/s | 14.5 KiB | 00m00s [ 9/145] rocm-hip-devel-0:6.4.2-2.fc43 100% | 3.6 MiB/s | 248.6 KiB | 00m00s [ 10/145] go-filesystem-0:3.8.0-1.fc43. 
100% | 232.9 KiB/s | 8.9 KiB | 00m00s [ 11/145] golang-0:1.25.1-2.fc43.x86_64 100% | 8.2 MiB/s | 1.2 MiB | 00m00s [ 12/145] golist-0:0.10.4-8.fc43.x86_64 100% | 8.2 MiB/s | 1.6 MiB | 00m00s [ 13/145] python3-license-expression-0: 100% | 3.5 MiB/s | 139.9 KiB | 00m00s [ 14/145] python3-zstarfile-0:0.3.0-4.f 100% | 983.6 KiB/s | 19.7 KiB | 00m00s [ 15/145] cmake-filesystem-0:3.31.6-4.f 100% | 188.9 KiB/s | 15.5 KiB | 00m00s [ 16/145] hipblas-0:6.4.1-4.fc43.x86_64 100% | 2.8 MiB/s | 161.3 KiB | 00m00s [ 17/145] hipblas-common-devel-0:6.4.0- 100% | 436.6 KiB/s | 13.1 KiB | 00m00s [ 18/145] cmake-0:3.31.6-4.fc43.x86_64 100% | 16.0 MiB/s | 12.2 MiB | 00m01s [ 19/145] gcc-c++-0:15.2.1-2.fc43.x86_6 100% | 17.3 MiB/s | 15.3 MiB | 00m01s [ 20/145] rocm-device-libs-0:19-14.rocm 100% | 6.4 MiB/s | 503.6 KiB | 00m00s [ 21/145] perl-File-Basename-0:2.86-520 100% | 1.0 MiB/s | 17.2 KiB | 00m00s [ 22/145] perl-File-Copy-0:2.41-520.fc4 100% | 529.7 KiB/s | 20.1 KiB | 00m00s [ 23/145] perl-File-Which-0:1.27-14.fc4 100% | 497.6 KiB/s | 21.4 KiB | 00m00s [ 24/145] perl-Getopt-Std-0:1.14-520.fc 100% | 981.6 KiB/s | 15.7 KiB | 00m00s [ 25/145] perl-PathTools-0:3.94-520.fc4 100% | 5.3 MiB/s | 87.2 KiB | 00m00s [ 26/145] perl-Scalar-List-Utils-5:1.70 100% | 4.6 MiB/s | 75.0 KiB | 00m00s [ 27/145] perl-URI-0:5.34-1.fc43.noarch 100% | 5.8 MiB/s | 149.1 KiB | 00m00s [ 28/145] perl-interpreter-4:5.42.0-520 100% | 4.4 MiB/s | 72.4 KiB | 00m00s [ 29/145] rocm-hip-0:6.4.2-2.fc43.x86_6 100% | 15.7 MiB/s | 9.5 MiB | 00m01s [ 30/145] rocm-comgr-0:19-14.rocm6.4.2. 100% | 29.5 MiB/s | 30.5 MiB | 00m01s [ 31/145] kmod-0:34.2-2.fc43.x86_64 100% | 3.0 MiB/s | 132.7 KiB | 00m00s [ 32/145] rocm-runtime-0:6.4.2-2.fc43.x 100% | 6.8 MiB/s | 649.5 KiB | 00m00s [ 33/145] expat-0:2.7.2-1.fc43.x86_64 100% | 3.3 MiB/s | 118.9 KiB | 00m00s [ 34/145] jsoncpp-0:1.9.6-2.fc43.x86_64 100% | 2.7 MiB/s | 101.1 KiB | 00m00s [ 35/145] libuv-1:1.51.0-2.fc43.x86_64 100% | 4.7 MiB/s | 266.1 KiB | 00m00s [ 36/145] make-1:4.4.1-11.fc43.x86_64 100% | 7.8 MiB/s | 585.2 KiB | 00m00s [ 37/145] rhash-0:1.4.5-3.fc43.x86_64 100% | 3.1 MiB/s | 197.9 KiB | 00m00s [ 38/145] cmake-data-0:3.31.6-4.fc43.no 100% | 7.6 MiB/s | 2.5 MiB | 00m00s [ 39/145] libmpc-0:1.3.1-8.fc43.x86_64 100% | 2.1 MiB/s | 70.4 KiB | 00m00s [ 40/145] golang-bin-0:1.25.1-2.fc43.x8 100% | 21.8 MiB/s | 17.7 MiB | 00m01s [ 41/145] gcc-0:15.2.1-2.fc43.x86_64 100% | 35.6 MiB/s | 39.7 MiB | 00m01s [ 42/145] python3-boolean.py-0:5.0-9.fc 100% | 5.7 MiB/s | 123.4 KiB | 00m00s [ 43/145] rocblas-0:6.4.2-4.fc43.x86_64 100% | 80.1 MiB/s | 258.3 MiB | 00m03s [ 44/145] golang-src-0:1.25.1-2.fc43.no 100% | 15.4 MiB/s | 13.6 MiB | 00m01s [ 45/145] rocm-lld-0:19-14.rocm6.4.2.fc 100% | 9.5 MiB/s | 1.5 MiB | 00m00s [ 46/145] rocm-clang-devel-0:19-14.rocm 100% | 10.6 MiB/s | 2.6 MiB | 00m00s [ 47/145] perl-Carp-0:1.54-520.fc43.noa 100% | 1.8 MiB/s | 28.7 KiB | 00m00s [ 48/145] perl-Exporter-0:5.79-520.fc43 100% | 1.9 MiB/s | 30.9 KiB | 00m00s [ 49/145] perl-overload-0:1.40-520.fc43 100% | 2.8 MiB/s | 45.6 KiB | 00m00s [ 50/145] perl-base-0:2.27-520.fc43.noa 100% | 1.0 MiB/s | 16.2 KiB | 00m00s [ 51/145] perl-constant-0:1.33-521.fc43 100% | 1.4 MiB/s | 22.8 KiB | 00m00s [ 52/145] perl-Errno-0:1.38-520.fc43.x8 100% | 934.0 KiB/s | 14.9 KiB | 00m00s [ 53/145] perl-libs-4:5.42.0-520.fc43.x 100% | 8.7 MiB/s | 2.6 MiB | 00m00s [ 54/145] perl-Data-Dumper-0:2.191-521. 
100% | 3.2 MiB/s | 56.3 KiB | 00m00s [ 55/145] perl-MIME-Base32-0:1.303-24.f 100% | 1.2 MiB/s | 20.4 KiB | 00m00s [ 56/145] perl-MIME-Base64-0:3.16-520.f 100% | 1.8 MiB/s | 29.7 KiB | 00m00s [ 57/145] perl-libnet-0:3.15-521.fc43.n 100% | 1.5 MiB/s | 128.3 KiB | 00m00s [ 58/145] perl-parent-1:0.244-520.fc43. 100% | 925.3 KiB/s | 14.8 KiB | 00m00s [ 59/145] hipcc-0:19-14.rocm6.4.2.fc43. 100% | 4.1 MiB/s | 133.6 KiB | 00m00s [ 60/145] numactl-libs-0:2.0.19-3.fc43. 100% | 776.8 KiB/s | 31.1 KiB | 00m00s [ 61/145] libdrm-0:2.4.125-2.fc43.x86_6 100% | 2.6 MiB/s | 161.3 KiB | 00m00s [ 62/145] emacs-filesystem-1:30.0-5.fc4 100% | 197.1 KiB/s | 7.5 KiB | 00m00s [ 63/145] vim-filesystem-2:9.1.1775-1.f 100% | 514.4 KiB/s | 15.4 KiB | 00m00s [ 64/145] cpp-0:15.2.1-2.fc43.x86_64 100% | 18.4 MiB/s | 12.9 MiB | 00m01s [ 65/145] rocm-clang-0:19-14.rocm6.4.2. 100% | 15.8 MiB/s | 16.0 MiB | 00m01s [ 66/145] rocm-clang-libs-0:19-14.rocm6 100% | 36.1 MiB/s | 22.8 MiB | 00m01s [ 67/145] rocm-llvm-static-0:19-14.rocm 100% | 74.9 MiB/s | 260.8 MiB | 00m03s [ 68/145] rocm-llvm-devel-0:19-14.rocm6 100% | 10.6 MiB/s | 4.1 MiB | 00m00s [ 69/145] perl-mro-0:1.29-520.fc43.x86_ 100% | 1.7 MiB/s | 29.9 KiB | 00m00s [ 70/145] perl-overloading-0:0.02-520.f 100% | 806.8 KiB/s | 12.9 KiB | 00m00s [ 71/145] perl-DynaLoader-0:1.57-520.fc 100% | 1.6 MiB/s | 26.0 KiB | 00m00s [ 72/145] rocm-llvm-libs-0:19-14.rocm6. 100% | 24.2 MiB/s | 20.2 MiB | 00m01s [ 73/145] perl-B-0:1.89-520.fc43.x86_64 100% | 2.5 MiB/s | 177.7 KiB | 00m00s [ 74/145] perl-Digest-MD5-0:2.59-520.fc 100% | 2.1 MiB/s | 35.8 KiB | 00m00s [ 75/145] perl-Fcntl-0:1.20-520.fc43.x8 100% | 1.8 MiB/s | 29.8 KiB | 00m00s [ 76/145] perl-FileHandle-0:2.05-520.fc 100% | 968.8 KiB/s | 15.5 KiB | 00m00s [ 77/145] perl-IO-0:1.55-520.fc43.x86_6 100% | 4.7 MiB/s | 82.2 KiB | 00m00s [ 78/145] perl-POSIX-0:2.23-520.fc43.x8 100% | 6.0 MiB/s | 97.8 KiB | 00m00s [ 79/145] perl-IO-Socket-IP-0:0.43-521. 100% | 2.4 MiB/s | 42.1 KiB | 00m00s [ 80/145] perl-Socket-4:2.040-2.fc43.x8 100% | 3.4 MiB/s | 54.9 KiB | 00m00s [ 81/145] perl-Symbol-0:1.09-520.fc43.n 100% | 887.7 KiB/s | 14.2 KiB | 00m00s [ 82/145] perl-Time-Local-2:1.350-520.f 100% | 2.2 MiB/s | 34.4 KiB | 00m00s [ 83/145] libpciaccess-0:0.16-16.fc43.x 100% | 238.0 KiB/s | 26.2 KiB | 00m00s [ 84/145] git-0:2.51.0-2.fc43.x86_64 100% | 387.4 KiB/s | 41.1 KiB | 00m00s [ 85/145] rocm-clang-runtime-devel-0:19 100% | 8.3 MiB/s | 631.1 KiB | 00m00s [ 86/145] rocm-libc++-devel-0:19-14.roc 100% | 9.4 MiB/s | 1.1 MiB | 00m00s [ 87/145] rocm-libc++-0:19-14.rocm6.4.2 100% | 6.5 MiB/s | 345.8 KiB | 00m00s [ 88/145] rocm-llvm-filesystem-0:19-14. 100% | 1.4 MiB/s | 24.7 KiB | 00m00s [ 89/145] perl-vars-0:1.05-520.fc43.noa 100% | 764.0 KiB/s | 13.0 KiB | 00m00s [ 90/145] perl-if-0:0.61.000-520.fc43.n 100% | 823.8 KiB/s | 14.0 KiB | 00m00s [ 91/145] perl-Digest-0:1.20-520.fc43.n 100% | 1.5 MiB/s | 24.8 KiB | 00m00s [ 92/145] perl-File-stat-0:1.14-520.fc4 100% | 1.0 MiB/s | 17.1 KiB | 00m00s [ 93/145] perl-SelectSaver-0:1.02-520.f 100% | 732.7 KiB/s | 11.7 KiB | 00m00s [ 94/145] perl-locale-0:1.13-520.fc43.n 100% | 900.3 KiB/s | 13.5 KiB | 00m00s [ 95/145] hwdata-0:0.399-1.fc43.noarch 100% | 8.9 MiB/s | 1.7 MiB | 00m00s [ 96/145] git-core-0:2.51.0-2.fc43.x86_ 100% | 48.1 MiB/s | 5.0 MiB | 00m00s [ 97/145] git-core-doc-0:2.51.0-2.fc43. 
100% | 33.7 MiB/s | 3.0 MiB | 00m00s [ 98/145] perl-Getopt-Long-1:2.58-520.f 100% | 3.7 MiB/s | 63.6 KiB | 00m00s [ 99/145] perl-Git-0:2.51.0-2.fc43.noar 100% | 953.7 KiB/s | 38.1 KiB | 00m00s [100/145] perl-IPC-Open3-0:1.24-520.fc4 100% | 1.4 MiB/s | 23.9 KiB | 00m00s [101/145] perl-TermReadKey-0:2.38-26.fc 100% | 1.1 MiB/s | 35.2 KiB | 00m00s [102/145] perl-lib-0:0.65-520.fc43.x86_ 100% | 415.3 KiB/s | 15.0 KiB | 00m00s [103/145] perl-Class-Struct-0:0.68-520. 100% | 1.3 MiB/s | 22.1 KiB | 00m00s [104/145] less-0:679-2.fc43.x86_64 100% | 3.7 MiB/s | 195.3 KiB | 00m00s [105/145] openssh-clients-0:10.0p1-5.fc 100% | 8.4 MiB/s | 746.7 KiB | 00m00s [106/145] perl-Pod-Usage-4:2.05-520.fc4 100% | 2.5 MiB/s | 40.5 KiB | 00m00s [107/145] perl-Text-ParseWords-0:3.31-5 100% | 961.6 KiB/s | 16.3 KiB | 00m00s [108/145] perl-Error-1:0.17030-2.fc43.n 100% | 1.1 MiB/s | 40.2 KiB | 00m00s [109/145] rocm-llvm-0:19-14.rocm6.4.2.f 100% | 14.8 MiB/s | 13.1 MiB | 00m01s [110/145] libedit-0:3.1-56.20250104cvs. 100% | 3.0 MiB/s | 105.2 KiB | 00m00s [111/145] libfido2-0:1.16.0-3.fc43.x86_ 100% | 2.3 MiB/s | 98.5 KiB | 00m00s [112/145] perl-Pod-Perldoc-0:3.28.01-52 100% | 4.8 MiB/s | 84.3 KiB | 00m00s [113/145] openssh-0:10.0p1-5.fc43.x86_6 100% | 5.6 MiB/s | 339.6 KiB | 00m00s [114/145] perl-podlators-1:6.0.2-520.fc 100% | 3.7 MiB/s | 128.3 KiB | 00m00s [115/145] libcbor-0:0.12.0-6.fc43.x86_6 100% | 859.2 KiB/s | 33.5 KiB | 00m00s [116/145] perl-File-Temp-1:0.231.100-52 100% | 3.4 MiB/s | 59.0 KiB | 00m00s [117/145] perl-HTTP-Tiny-0:0.090-521.fc 100% | 3.4 MiB/s | 56.3 KiB | 00m00s [118/145] perl-Pod-Simple-1:3.47-3.fc43 100% | 7.4 MiB/s | 219.9 KiB | 00m00s [119/145] perl-Term-ANSIColor-0:5.01-52 100% | 2.6 MiB/s | 47.6 KiB | 00m00s [120/145] perl-Term-Cap-0:1.18-520.fc43 100% | 1.4 MiB/s | 21.9 KiB | 00m00s [121/145] perl-File-Path-0:2.18-520.fc4 100% | 2.1 MiB/s | 35.1 KiB | 00m00s [122/145] perl-IO-Socket-SSL-0:2.095-2. 100% | 13.3 MiB/s | 231.5 KiB | 00m00s [123/145] groff-base-0:1.23.0-10.fc43.x 100% | 6.8 MiB/s | 1.1 MiB | 00m00s [124/145] perl-Pod-Escapes-1:1.07-520.f 100% | 1.2 MiB/s | 19.8 KiB | 00m00s [125/145] perl-Text-Tabs+Wrap-0:2024.00 100% | 1.3 MiB/s | 21.6 KiB | 00m00s [126/145] perl-Net-SSLeay-0:1.94-11.fc4 100% | 8.0 MiB/s | 374.8 KiB | 00m00s [127/145] perl-AutoLoader-0:5.74-520.fc 100% | 1.3 MiB/s | 21.2 KiB | 00m00s [128/145] ncurses-0:6.5-7.20250614.fc43 100% | 9.9 MiB/s | 426.2 KiB | 00m00s [129/145] python3-0:3.14.0~rc3-1.fc43.x 100% | 460.0 KiB/s | 27.6 KiB | 00m00s [130/145] mpdecimal-0:4.0.1-2.fc43.x86_ 100% | 5.9 MiB/s | 97.1 KiB | 00m00s [131/145] python-pip-wheel-0:25.1.1-18. 100% | 12.4 MiB/s | 1.2 MiB | 00m00s [132/145] tzdata-0:2025b-3.fc43.noarch 100% | 7.7 MiB/s | 713.9 KiB | 00m00s [133/145] zlib-ng-compat-devel-0:2.2.5- 100% | 1.1 MiB/s | 38.3 KiB | 00m00s [134/145] perl-Encode-4:3.21-520.fc43.x 100% | 5.9 MiB/s | 1.1 MiB | 00m00s [135/145] perl-Storable-1:3.37-521.fc43 100% | 4.0 MiB/s | 98.5 KiB | 00m00s [136/145] libstdc++-devel-0:15.2.1-2.fc 100% | 53.4 MiB/s | 5.3 MiB | 00m00s [137/145] python3-libs-0:3.14.0~rc3-1.f 100% | 15.0 MiB/s | 9.8 MiB | 00m01s [138/145] libxcrypt-devel-0:4.4.38-8.fc 100% | 1.1 MiB/s | 29.2 KiB | 00m00s [139/145] glibc-devel-0:2.42-4.fc43.x86 100% | 5.5 MiB/s | 565.9 KiB | 00m00s [140/145] annobin-plugin-gcc-0:12.99-1. 
100% | 48.6 MiB/s | 996.0 KiB | 00m00s [141/145] gcc-plugin-annobin-0:15.2.1-2 100% | 2.1 MiB/s | 57.1 KiB | 00m00s [142/145] annobin-docs-0:12.99-1.fc43.n 100% | 5.5 MiB/s | 89.5 KiB | 00m00s [143/145] cmake-rpm-macros-0:3.31.6-4.f 100% | 423.0 KiB/s | 14.8 KiB | 00m00s [144/145] kernel-headers-0:6.17.0-63.fc 100% | 7.4 MiB/s | 1.7 MiB | 00m00s [145/145] rocsolver-0:6.4.2-3.fc43.x86_ 100% | 90.3 MiB/s | 881.2 MiB | 00m10s -------------------------------------------------------------------------------- [145/145] Total 100% | 127.8 MiB/s | 1.6 GiB | 00m13s Running transaction [ 1/147] Verify package files 100% | 24.0 B/s | 145.0 B | 00m06s [ 2/147] Prepare transaction 100% | 1.0 KiB/s | 145.0 B | 00m00s [ 3/147] Installing cmake-filesystem-0 100% | 7.4 MiB/s | 7.6 KiB | 00m00s [ 4/147] Installing libmpc-0:1.3.1-8.f 100% | 158.3 MiB/s | 162.1 KiB | 00m00s [ 5/147] Installing expat-0:2.7.2-1.fc 100% | 21.0 MiB/s | 300.7 KiB | 00m00s [ 6/147] Installing rocm-llvm-filesyst 100% | 6.2 MiB/s | 19.1 KiB | 00m00s [ 7/147] Installing rocm-libc++-0:19-1 100% | 45.6 MiB/s | 1.2 MiB | 00m00s [ 8/147] Installing rocm-llvm-libs-0:1 100% | 72.5 MiB/s | 84.8 MiB | 00m01s [ 9/147] Installing rocm-clang-libs-0: 100% | 73.7 MiB/s | 98.4 MiB | 00m01s [ 10/147] Installing numactl-libs-0:2.0 100% | 56.4 MiB/s | 57.8 KiB | 00m00s [ 11/147] Installing make-1:4.4.1-11.fc 100% | 94.7 MiB/s | 1.8 MiB | 00m00s [ 12/147] Installing rocm-comgr-0:19-14 100% | 69.5 MiB/s | 123.9 MiB | 00m02s [ 13/147] Installing go-filesystem-0:3. 100% | 0.0 B/s | 392.0 B | 00m00s [ 14/147] Installing rocm-lld-0:19-14.r 100% | 65.3 MiB/s | 5.7 MiB | 00m00s [ 15/147] Installing rocm-libc++-devel- 100% | 94.5 MiB/s | 7.7 MiB | 00m00s [ 16/147] Installing cpp-0:15.2.1-2.fc4 100% | 324.4 MiB/s | 38.0 MiB | 00m00s [ 17/147] Installing hipblas-common-dev 100% | 17.4 MiB/s | 17.8 KiB | 00m00s [ 18/147] Installing zlib-ng-compat-dev 100% | 106.0 MiB/s | 108.5 KiB | 00m00s [ 19/147] Installing annobin-docs-0:12. 100% | 32.6 MiB/s | 100.1 KiB | 00m00s [ 20/147] Installing kernel-headers-0:6 100% | 202.2 MiB/s | 6.9 MiB | 00m00s [ 21/147] Installing glibc-devel-0:2.42 100% | 168.1 MiB/s | 2.4 MiB | 00m00s [ 22/147] Installing libxcrypt-devel-0: 100% | 16.2 MiB/s | 33.1 KiB | 00m00s [ 23/147] Installing gcc-0:15.2.1-2.fc4 100% | 371.8 MiB/s | 111.9 MiB | 00m00s [ 24/147] Installing libstdc++-devel-0: 100% | 430.9 MiB/s | 37.5 MiB | 00m00s [ 25/147] Installing tzdata-0:2025b-3.f 100% | 63.1 MiB/s | 1.9 MiB | 00m00s [ 26/147] Installing python-pip-wheel-0 100% | 622.6 MiB/s | 1.2 MiB | 00m00s [ 27/147] Installing mpdecimal-0:4.0.1- 100% | 35.6 MiB/s | 218.8 KiB | 00m00s [ 28/147] Installing python3-libs-0:3.1 100% | 321.1 MiB/s | 43.3 MiB | 00m00s [ 29/147] Installing python3-0:3.14.0~r 100% | 2.1 MiB/s | 30.7 KiB | 00m00s [ 30/147] Installing cmake-rpm-macros-0 100% | 8.1 MiB/s | 8.3 KiB | 00m00s [ 31/147] Installing python3-zstarfile- 100% | 29.0 MiB/s | 29.7 KiB | 00m00s [ 32/147] Installing python3-boolean.py 100% | 210.4 MiB/s | 646.4 KiB | 00m00s [ 33/147] Installing python3-license-ex 100% | 390.9 MiB/s | 1.2 MiB | 00m00s [ 34/147] Installing ncurses-0:6.5-7.20 100% | 28.7 MiB/s | 616.4 KiB | 00m00s [ 35/147] Installing groff-base-0:1.23. 100% | 116.5 MiB/s | 3.8 MiB | 00m00s [ 36/147] Installing perl-Digest-0:1.20 100% | 36.2 MiB/s | 37.1 KiB | 00m00s [ 37/147] Installing perl-FileHandle-0: 100% | 0.0 B/s | 9.8 KiB | 00m00s [ 38/147] Installing perl-Digest-MD5-0: 100% | 60.1 MiB/s | 61.6 KiB | 00m00s [ 39/147] Installing perl-B-0:1.89-520. 
100% | 246.4 MiB/s | 504.7 KiB | 00m00s [ 40/147] Installing perl-libnet-0:3.15 100% | 143.9 MiB/s | 294.7 KiB | 00m00s [ 41/147] Installing perl-Data-Dumper-0 100% | 114.8 MiB/s | 117.5 KiB | 00m00s [ 42/147] Installing perl-MIME-Base32-0 100% | 0.0 B/s | 32.2 KiB | 00m00s [ 43/147] Installing perl-AutoLoader-0: 100% | 0.0 B/s | 21.0 KiB | 00m00s [ 44/147] Installing perl-IO-Socket-IP- 100% | 99.8 MiB/s | 102.2 KiB | 00m00s [ 45/147] Installing perl-URI-0:5.34-1. 100% | 91.7 MiB/s | 281.8 KiB | 00m00s [ 46/147] Installing perl-Net-SSLeay-0: 100% | 271.7 MiB/s | 1.4 MiB | 00m00s [ 47/147] Installing perl-IO-Socket-SSL 100% | 350.9 MiB/s | 718.6 KiB | 00m00s [ 48/147] Installing perl-Text-Tabs+Wra 100% | 0.0 B/s | 23.9 KiB | 00m00s [ 49/147] Installing perl-Pod-Escapes-1 100% | 0.0 B/s | 25.9 KiB | 00m00s [ 50/147] Installing perl-File-Path-0:2 100% | 0.0 B/s | 64.5 KiB | 00m00s [ 51/147] Installing perl-locale-0:1.13 100% | 0.0 B/s | 6.5 KiB | 00m00s [ 52/147] Installing perl-if-0:0.61.000 100% | 0.0 B/s | 6.2 KiB | 00m00s [ 53/147] Installing perl-Time-Local-2: 100% | 68.9 MiB/s | 70.6 KiB | 00m00s [ 54/147] Installing perl-Pod-Simple-1: 100% | 280.7 MiB/s | 574.9 KiB | 00m00s [ 55/147] Installing perl-HTTP-Tiny-0:0 100% | 152.8 MiB/s | 156.4 KiB | 00m00s [ 56/147] Installing perl-File-Temp-1:0 100% | 160.2 MiB/s | 164.1 KiB | 00m00s [ 57/147] Installing perl-Term-Cap-0:1. 100% | 0.0 B/s | 30.6 KiB | 00m00s [ 58/147] Installing perl-Term-ANSIColo 100% | 96.9 MiB/s | 99.2 KiB | 00m00s [ 59/147] Installing perl-Class-Struct- 100% | 0.0 B/s | 25.9 KiB | 00m00s [ 60/147] Installing perl-IPC-Open3-0:1 100% | 0.0 B/s | 28.5 KiB | 00m00s [ 61/147] Installing perl-POSIX-0:2.23- 100% | 227.2 MiB/s | 232.6 KiB | 00m00s [ 62/147] Installing perl-podlators-1:6 100% | 22.4 MiB/s | 321.4 KiB | 00m00s [ 63/147] Installing perl-Pod-Perldoc-0 100% | 11.8 MiB/s | 169.2 KiB | 00m00s [ 64/147] Installing perl-File-stat-0:1 100% | 12.8 MiB/s | 13.1 KiB | 00m00s [ 65/147] Installing perl-SelectSaver-0 100% | 0.0 B/s | 2.6 KiB | 00m00s [ 66/147] Installing perl-Symbol-0:1.09 100% | 0.0 B/s | 7.3 KiB | 00m00s [ 67/147] Installing perl-Socket-4:2.04 100% | 119.4 MiB/s | 122.3 KiB | 00m00s [ 68/147] Installing perl-Pod-Usage-4:2 100% | 6.6 MiB/s | 87.9 KiB | 00m00s [ 69/147] Installing perl-Text-ParseWor 100% | 14.2 MiB/s | 14.6 KiB | 00m00s [ 70/147] Installing perl-IO-0:1.55-520 100% | 148.1 MiB/s | 151.7 KiB | 00m00s [ 71/147] Installing perl-Fcntl-0:1.20- 100% | 0.0 B/s | 49.9 KiB | 00m00s [ 72/147] Installing perl-overloading-0 100% | 0.0 B/s | 5.6 KiB | 00m00s [ 73/147] Installing perl-mro-0:1.29-52 100% | 41.7 MiB/s | 42.7 KiB | 00m00s [ 74/147] Installing perl-base-0:2.27-5 100% | 0.0 B/s | 13.0 KiB | 00m00s [ 75/147] Installing perl-File-Basename 100% | 0.0 B/s | 14.6 KiB | 00m00s [ 76/147] Installing perl-Getopt-Long-1 100% | 143.8 MiB/s | 147.2 KiB | 00m00s [ 77/147] Installing perl-Storable-1:3. 100% | 227.4 MiB/s | 232.8 KiB | 00m00s [ 78/147] Installing perl-vars-0:1.05-5 100% | 0.0 B/s | 4.3 KiB | 00m00s [ 79/147] Installing perl-overload-0:1. 100% | 0.0 B/s | 72.0 KiB | 00m00s [ 80/147] Installing perl-parent-1:0.24 100% | 0.0 B/s | 11.0 KiB | 00m00s [ 81/147] Installing perl-MIME-Base64-0 100% | 43.2 MiB/s | 44.3 KiB | 00m00s [ 82/147] Installing perl-Errno-0:1.38- 100% | 0.0 B/s | 8.8 KiB | 00m00s [ 83/147] Installing perl-constant-0:1. 
100% | 0.0 B/s | 27.4 KiB | 00m00s [ 84/147] Installing perl-Scalar-List-U 100% | 145.2 MiB/s | 148.7 KiB | 00m00s [ 85/147] Installing perl-Getopt-Std-0: 100% | 11.5 MiB/s | 11.8 KiB | 00m00s [ 86/147] Installing perl-Encode-4:3.21 100% | 187.8 MiB/s | 4.7 MiB | 00m00s [ 87/147] Installing perl-DynaLoader-0: 100% | 31.7 MiB/s | 32.5 KiB | 00m00s [ 88/147] Installing perl-PathTools-0:3 100% | 90.1 MiB/s | 184.6 KiB | 00m00s [ 89/147] Installing perl-Exporter-0:5. 100% | 54.3 MiB/s | 55.6 KiB | 00m00s [ 90/147] Installing perl-Carp-0:1.54-5 100% | 23.3 MiB/s | 47.7 KiB | 00m00s [ 91/147] Installing perl-libs-4:5.42.0 100% | 291.2 MiB/s | 11.6 MiB | 00m00s [ 92/147] Installing perl-interpreter-4 100% | 9.0 MiB/s | 120.3 KiB | 00m00s [ 93/147] Installing perl-File-Copy-0:2 100% | 0.0 B/s | 20.2 KiB | 00m00s [ 94/147] Installing perl-File-Which-0: 100% | 0.0 B/s | 31.4 KiB | 00m00s [ 95/147] Installing perl-TermReadKey-0 100% | 64.6 MiB/s | 66.2 KiB | 00m00s [ 96/147] Installing perl-lib-0:0.65-52 100% | 0.0 B/s | 8.9 KiB | 00m00s [ 97/147] Installing perl-Error-1:0.170 100% | 78.1 MiB/s | 80.0 KiB | 00m00s [ 98/147] Installing libcbor-0:0.12.0-6 100% | 77.3 MiB/s | 79.2 KiB | 00m00s [ 99/147] Installing libfido2-0:1.16.0- 100% | 234.4 MiB/s | 240.0 KiB | 00m00s [100/147] Installing openssh-0:10.0p1-5 100% | 87.0 MiB/s | 1.4 MiB | 00m00s [101/147] Installing libedit-0:3.1-56.2 100% | 118.1 MiB/s | 241.8 KiB | 00m00s [102/147] Installing openssh-clients-0: 100% | 108.7 MiB/s | 2.6 MiB | 00m00s [103/147] Installing less-0:679-2.fc43. 100% | 26.7 MiB/s | 409.4 KiB | 00m00s [104/147] Installing git-core-0:2.51.0- 100% | 342.8 MiB/s | 23.7 MiB | 00m00s [105/147] Installing git-core-doc-0:2.5 100% | 357.8 MiB/s | 17.9 MiB | 00m00s [106/147] Installing git-0:2.51.0-2.fc4 100% | 56.4 MiB/s | 57.7 KiB | 00m00s [107/147] Installing perl-Git-0:2.51.0- 100% | 0.0 B/s | 65.4 KiB | 00m00s [108/147] Installing hwdata-0:0.399-1.f 100% | 479.8 MiB/s | 9.6 MiB | 00m00s [109/147] Installing libpciaccess-0:0.1 100% | 44.8 MiB/s | 45.9 KiB | 00m00s [110/147] Installing libdrm-0:2.4.125-2 100% | 195.1 MiB/s | 399.7 KiB | 00m00s [111/147] Installing rocm-runtime-0:6.4 100% | 439.3 MiB/s | 3.1 MiB | 00m00s [112/147] Installing rocm-runtime-devel 100% | 280.7 MiB/s | 574.9 KiB | 00m00s [113/147] Installing rocm-llvm-0:19-14. 100% | 65.5 MiB/s | 48.5 MiB | 00m01s [114/147] Installing rocm-llvm-devel-0: 100% | 89.2 MiB/s | 25.7 MiB | 00m00s [115/147] Installing rocm-llvm-static-0 100% | 93.5 MiB/s | 1.8 GiB | 00m20s [116/147] Installing rocm-clang-runtime 100% | 131.0 MiB/s | 7.9 MiB | 00m00s [117/147] Installing rocm-clang-0:19-14 100% | 75.9 MiB/s | 70.2 MiB | 00m01s [118/147] Installing rocm-clang-devel-0 100% | 117.3 MiB/s | 23.5 MiB | 00m00s [119/147] Installing rocm-device-libs-0 100% | 89.2 MiB/s | 3.2 MiB | 00m00s [120/147] Installing rocm-comgr-devel-0 100% | 97.3 MiB/s | 99.6 KiB | 00m00s [121/147] Installing hipcc-0:19-14.rocm 100% | 32.0 MiB/s | 654.3 KiB | 00m00s [122/147] Installing rocm-hip-0:6.4.2-2 100% | 332.6 MiB/s | 24.9 MiB | 00m00s [123/147] Installing rocblas-0:6.4.2-4. 100% | 132.8 MiB/s | 3.9 GiB | 00m30s [124/147] Installing rocsolver-0:6.4.2- 100% | 31.2 MiB/s | 987.8 MiB | 00m32s [125/147] Installing hipblas-0:6.4.1-4. 
100% | 16.9 MiB/s | 1.1 MiB | 00m00s [126/147] Installing rocm-hip-devel-0:6 100% | 33.8 MiB/s | 2.8 MiB | 00m00s [127/147] Installing vim-filesystem-2:9 100% | 471.9 KiB/s | 4.7 KiB | 00m00s [128/147] Installing emacs-filesystem-1 100% | 17.7 KiB/s | 544.0 B | 00m00s [129/147] Installing golang-src-0:1.25. 100% | 260.0 MiB/s | 82.4 MiB | 00m00s [130/147] Installing golang-0:1.25.1-2. 100% | 23.4 MiB/s | 9.6 MiB | 00m00s [131/147] Installing golang-bin-0:1.25. 100% | 320.2 MiB/s | 67.2 MiB | 00m00s [132/147] Installing rhash-0:1.4.5-3.fc 100% | 12.0 MiB/s | 356.4 KiB | 00m00s [133/147] Installing libuv-1:1.51.0-2.f 100% | 46.6 MiB/s | 573.0 KiB | 00m00s [134/147] Installing jsoncpp-0:1.9.6-2. 100% | 23.0 MiB/s | 259.2 KiB | 00m00s [135/147] Installing cmake-0:3.31.6-4.f 100% | 103.3 MiB/s | 34.5 MiB | 00m00s [136/147] Installing cmake-data-0:3.31. 100% | 72.0 MiB/s | 9.1 MiB | 00m00s [137/147] Installing kmod-0:34.2-2.fc43 100% | 8.8 MiB/s | 253.1 KiB | 00m00s [138/147] Installing golist-0:0.10.4-8. 100% | 21.3 MiB/s | 4.5 MiB | 00m00s [139/147] Installing go-rpm-macros-0:3. 100% | 1.0 MiB/s | 99.5 KiB | 00m00s [140/147] Installing rocminfo-0:6.4.0-2 100% | 4.0 MiB/s | 78.7 KiB | 00m00s [141/147] Installing rocblas-devel-0:6. 100% | 164.1 MiB/s | 2.8 MiB | 00m00s [142/147] Installing hipblas-devel-0:6. 100% | 135.4 MiB/s | 3.1 MiB | 00m00s [143/147] Installing go-vendor-tools-0: 100% | 16.8 MiB/s | 360.7 KiB | 00m00s [144/147] Installing gcc-c++-0:15.2.1-2 100% | 297.6 MiB/s | 41.4 MiB | 00m00s [145/147] Installing annobin-plugin-gcc 100% | 41.1 MiB/s | 1.0 MiB | 00m00s [146/147] Installing gcc-plugin-annobin 100% | 2.9 MiB/s | 58.6 KiB | 00m00s [147/147] Installing systemd-rpm-macros 100% | 13.0 KiB/s | 8.9 KiB | 00m01s Complete! Finish: build setup for ollama-0.12.3-1.fc43.src.rpm Start: rpmbuild ollama-0.12.3-1.fc43.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%mkbuilddir): /bin/sh -e /var/tmp/rpm-tmp.zaKYMR Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.ljPIRG + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd /builddir/build/BUILD/ollama-0.12.3-build + rm -rf ollama-0.12.3 + /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/ollama-0.12.3.tar.gz + STATUS=0 + '[' 0 -ne 0 ']' + cd ollama-0.12.3 + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + rm -fr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/vendor + [[ ! -e /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin ]] + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin' + export GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + [[ ! 
-e /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama ]]
++ dirname /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama
+ install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama
install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src'
install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com'
install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama'
+ ln -fs /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama
+ cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/vendor.tar.bz2
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/remove-runtime-for-cuda-and-rocm.patch
+ /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f
+ /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/replace-library-paths.patch
+ /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f
+ /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/vendor-pdevine-tensor-fix-cannonical-import-paths.patch
+ /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f
+ cp /builddir/build/SOURCES/LICENSE.sentencepiece convert/sentencepiece/LICENSE
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.Ad6E9C
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires
+ RPM_EC=0
++ jobs -p
+ exit 0
Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc43.buildreqs.nosrc.rpm
INFO: Going to install missing dynamic buildrequires
Updating and loading repositories:
 Additional repo https_developer_downlo 100% | 15.6 KiB/s | 3.9 KiB | 00m00s
 Additional repo https_developer_downlo 100% | 15.6 KiB/s | 3.9 KiB | 00m00s
 Copr repository 100% | 6.0 KiB/s | 1.5 KiB | 00m00s
 fedora 100% | 63.7 KiB/s | 28.2 KiB | 00m00s
 updates 100% | 81.3 KiB/s | 30.2 KiB | 00m00s
Repositories loaded.
Package "cmake-3.31.6-4.fc43.x86_64" is already installed.
Package "gcc-c++-15.2.1-2.fc43.x86_64" is already installed.
Package "go-rpm-macros-3.8.0-1.fc43.x86_64" is already installed.
Package "go-vendor-tools-0.8.0-5.fc43.noarch" is already installed.
Package "hipblas-devel-6.4.1-4.fc43.x86_64" is already installed.
Package "rocblas-devel-6.4.2-4.fc43.x86_64" is already installed.
Package "rocm-comgr-devel-19-14.rocm6.4.2.fc43.x86_64" is already installed.
Package "rocm-hip-devel-6.4.2-2.fc43.x86_64" is already installed.
Package "rocm-runtime-devel-6.4.2-2.fc43.x86_64" is already installed.
Package "rocminfo-6.4.0-2.fc43.x86_64" is already installed.
Package "systemd-rpm-macros-258-1.fc43.noarch" is already installed.
Total size of inbound packages is 2 MiB. Need to download 2 MiB.
After this operation, 5 MiB extra will be used (install 5 MiB, remove 0 B).
Package                                 Arch    Version        Repository    Size
Installing:
 askalono-cli                           x86_64  0.5.0-3.fc43   fedora     4.6 MiB
Transaction Summary:
 Installing:  1 package
[1/1] askalono-cli-0:0.5.0-3.fc43.x86_6 100% | 2.3 MiB/s | 2.4 MiB | 00m01s
--------------------------------------------------------------------------------
[1/1] Total 100% | 2.3 MiB/s | 2.4 MiB | 00m01s
Running transaction
[1/3] Verify package files 100% | 111.0 B/s | 1.0 B | 00m00s
[2/3] Prepare transaction 100% | 37.0 B/s | 1.0 B | 00m00s
[3/3] Installing askalono-cli-0:0.5.0-3 100% | 121.7 MiB/s | 4.6 MiB | 00m00s
Complete!
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1759536000
Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.4QaRYS
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.3-build
+ cd ollama-0.12.3
+ go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires
+ RPM_EC=0
++ jobs -p
+ exit 0
Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc43.buildreqs.nosrc.rpm
INFO: Going to install missing dynamic buildrequires
Updating and loading repositories:
 Additional repo https_developer_downlo 100% | 19.3 KiB/s | 3.9 KiB | 00m00s
 Additional repo https_developer_downlo 100% | 19.3 KiB/s | 3.9 KiB | 00m00s
 Copr repository 100% | 7.4 KiB/s | 1.5 KiB | 00m00s
 fedora 100% | 50.8 KiB/s | 28.2 KiB | 00m01s
 updates 100% | 115.1 KiB/s | 30.2 KiB | 00m00s
Repositories loaded.
Nothing to do.
Package "askalono-cli-0.5.0-3.fc43.x86_64" is already installed.
Package "cmake-3.31.6-4.fc43.x86_64" is already installed.
Package "gcc-c++-15.2.1-2.fc43.x86_64" is already installed.
Package "go-rpm-macros-3.8.0-1.fc43.x86_64" is already installed.
Package "go-vendor-tools-0.8.0-5.fc43.noarch" is already installed.
Package "hipblas-devel-6.4.1-4.fc43.x86_64" is already installed.
Package "rocblas-devel-6.4.2-4.fc43.x86_64" is already installed.
Package "rocm-comgr-devel-19-14.rocm6.4.2.fc43.x86_64" is already installed.
Package "rocm-hip-devel-6.4.2-2.fc43.x86_64" is already installed.
Package "rocm-runtime-devel-6.4.2-2.fc43.x86_64" is already installed.
Package "rocminfo-6.4.0-2.fc43.x86_64" is already installed.
Package "systemd-rpm-macros-258-1.fc43.noarch" is already installed.
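The repeated Executing(%generate_buildrequires) passes above and below are the dynamic BuildRequires loop: each pass runs go_vendor_license against go-vendor-tools.toml, mock installs whatever new packages it emits (here askalono-cli, which the first pass requested for license detection), and the loop repeats until a pass adds nothing new, after which %build starts. A minimal sketch of that single step outside the chroot, assuming an unpacked ollama-0.12.3 tree with the package's go-vendor-tools.toml one directory up (paths here are illustrative, not the /builddir/build/SOURCES layout used above):

# Hedged sketch: same go_vendor_license invocation as in this log,
# with an illustrative local path to the config file.
cd ollama-0.12.3
go_vendor_license --config ../go-vendor-tools.toml generate_buildrequires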
Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.UaKjKH + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.RN5nTD + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + export 'GO_LDFLAGS= -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release' + GO_LDFLAGS=' -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release' ++ echo ollama-0.12.3-1.fc43-1759536000 ++ sha1sum ++ cut -d ' ' -f1 + GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + GO111MODULE=on + go build -buildmode pie -compiler gc '-tags=rpm_crashtraceback ' -a -v -ldflags ' -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release -X github.com/ollama/ollama/version=0.12.3 -B 0xd73abbaab5cd0130ffbf90996b0d6dff94a7fc9c -compressdwarf=false -linkmode=external -extldflags '\''-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '\''' -o /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama github.com/ollama/ollama internal/goarch internal/byteorder internal/unsafeheader internal/cpu internal/coverage/rtcov internal/abi internal/godebugs internal/goexperiment internal/goos internal/profilerecord internal/runtime/atomic internal/runtime/math internal/runtime/strconv internal/runtime/syscall internal/bytealg internal/chacha8rand internal/runtime/exithook internal/runtime/gc internal/asan internal/msan internal/runtime/cgroup internal/runtime/sys internal/stringslite internal/trace/tracev2 sync/atomic math/bits internal/itoa cmp unicode unicode/utf8 math internal/race internal/synctest crypto/internal/fips140deps/byteorder internal/runtime/maps internal/sync crypto/internal/fips140deps/cpu crypto/internal/fips140/alias crypto/internal/fips140/subtle crypto/internal/boring/sig encoding unicode/utf16 github.com/rivo/uniseg internal/nettrace vendor/golang.org/x/crypto/cryptobyte/asn1 golang.org/x/crypto/internal/alias log/internal runtime log/slog/internal github.com/ollama/ollama/version container/list vendor/golang.org/x/crypto/internal/alias golang.org/x/text/encoding/internal/identifier golang.org/x/text/internal/utf8internal github.com/ollama/ollama/fs image/color golang.org/x/image/math/f64 github.com/gin-gonic/gin/internal/bytesconv golang.org/x/net/html/atom github.com/go-playground/locales/currency github.com/leodido/go-urn/scim/schema github.com/pelletier/go-toml/v2/internal/characters google.golang.org/protobuf/internal/flags github.com/d4l3k/go-bfloat16 google.golang.org/protobuf/internal/set github.com/apache/arrow/go/arrow/internal/debug golang.org/x/xerrors/internal github.com/chewxy/math32 gorgonia.org/vecf64 math/cmplx gonum.org/v1/gonum/blas 
gonum.org/v1/gonum/internal/asm/c128 gorgonia.org/vecf32 gonum.org/v1/gonum/internal/math32 gonum.org/v1/gonum/internal/asm/f64 gonum.org/v1/gonum/lapack gonum.org/v1/gonum/internal/cmplx64 gonum.org/v1/gonum/internal/asm/c64 gonum.org/v1/gonum/internal/asm/f32 gonum.org/v1/gonum/mathext/internal/amos gonum.org/v1/gonum/mathext/internal/gonum gonum.org/v1/gonum/mathext/internal/cephes github.com/ollama/ollama/server/internal/internal/stringsx github.com/agnivade/levenshtein gonum.org/v1/gonum/mathext internal/reflectlite iter weak sync crypto/subtle slices maps sort errors internal/bisect internal/testlog internal/oserror syscall io internal/godebug strconv bytes strings hash crypto/internal/fips140deps/godebug crypto path math/rand/v2 crypto/internal/fips140cache bufio crypto/internal/fips140 crypto/internal/impl crypto/internal/fips140/sha3 crypto/internal/fips140/sha256 time crypto/internal/fips140/sha512 internal/syscall/execenv internal/syscall/unix crypto/internal/randutil reflect math/rand crypto/internal/fips140/hmac crypto/internal/fips140/check encoding/base64 crypto/internal/fips140/aes crypto/internal/fips140/edwards25519/field encoding/pem crypto/internal/fips140/edwards25519 regexp/syntax context io/fs internal/poll internal/filepathlite regexp vendor/golang.org/x/net/dns/dnsmessage os internal/singleflight unique runtime/cgo net/netip internal/fmtsort encoding/binary crypto/internal/sysrand fmt golang.org/x/sys/unix crypto/internal/entropy crypto/internal/fips140/drbg crypto/internal/fips140/ed25519 crypto/internal/fips140only crypto/internal/fips140/aes/gcm crypto/cipher math/big crypto/internal/boring encoding/json github.com/containerd/console github.com/mattn/go-runewidth encoding/csv crypto/md5 crypto/rand crypto/ed25519 github.com/olekukonko/tablewriter crypto/sha1 database/sql/driver encoding/hex crypto/aes crypto/des crypto/dsa crypto/internal/fips140/nistec/fiat crypto/internal/boring/bbig crypto/internal/fips140/bigmod crypto/sha3 crypto/internal/fips140hash crypto/sha512 encoding/asn1 crypto/hmac crypto/rc4 crypto/internal/fips140/rsa vendor/golang.org/x/crypto/cryptobyte crypto/rsa crypto/sha256 crypto/x509/pkix crypto/internal/fips140/nistec net/url path/filepath golang.org/x/crypto/chacha20 golang.org/x/crypto/internal/poly1305 golang.org/x/crypto/blowfish log golang.org/x/crypto/ssh/internal/bcrypt_pbkdf net log/slog/internal/buffer github.com/ollama/ollama/format compress/flate log/slog hash/crc32 compress/gzip crypto/internal/fips140/ecdh crypto/elliptic crypto/internal/fips140/ecdsa crypto/ecdh golang.org/x/crypto/curve25519 github.com/ollama/ollama/types/model crypto/ecdsa crypto/internal/fips140/hkdf crypto/hkdf crypto/internal/fips140/mlkem crypto/internal/fips140/tls12 crypto/internal/fips140/tls13 vendor/golang.org/x/crypto/chacha20 vendor/golang.org/x/crypto/internal/poly1305 vendor/golang.org/x/sys/cpu crypto/fips140 vendor/golang.org/x/crypto/chacha20poly1305 crypto/tls/internal/fips140tls vendor/golang.org/x/text/transform crypto/internal/hpke vendor/golang.org/x/text/unicode/bidi vendor/golang.org/x/text/unicode/norm vendor/golang.org/x/net/http2/hpack mime vendor/golang.org/x/text/secure/bidirule mime/quotedprintable net/http/internal net/http/internal/ascii golang.org/x/sync/errgroup golang.org/x/text/transform vendor/golang.org/x/net/idna golang.org/x/text/encoding golang.org/x/text/runes golang.org/x/text/encoding/internal os/user golang.org/x/text/encoding/unicode golang.org/x/term github.com/emirpasic/gods/v2/utils 
github.com/emirpasic/gods/v2/containers github.com/emirpasic/gods/v2/lists github.com/ollama/ollama/progress github.com/emirpasic/gods/v2/lists/arraylist flag github.com/ollama/ollama/readline embed github.com/ollama/ollama/llama/llama.cpp/common github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 github.com/google/uuid crypto/x509 github.com/ollama/ollama/envconfig net/textproto vendor/golang.org/x/net/http/httpguts vendor/golang.org/x/net/http/httpproxy mime/multipart github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile golang.org/x/crypto/ssh github.com/ollama/ollama/auth crypto/tls github.com/ollama/ollama/llama/llama.cpp/tools/mtmd net/http/httptrace net/http/internal/httpcommon net/http github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu github.com/ollama/ollama/api github.com/ollama/ollama/parser github.com/ollama/ollama/discover github.com/ollama/ollama/fs/util/bufioutil github.com/ollama/ollama/fs/ggml github.com/ollama/ollama/logutil hash/maphash github.com/ollama/ollama/ml container/heap github.com/dlclark/regexp2/syntax github.com/dlclark/regexp2 github.com/emirpasic/gods/v2/trees github.com/emirpasic/gods/v2/trees/binaryheap github.com/ollama/ollama/model/input github.com/ollama/ollama/kvcache github.com/ollama/ollama/ml/nn/rope github.com/ollama/ollama/ml/nn/pooling image golang.org/x/image/bmp hash/adler32 compress/zlib golang.org/x/image/ccitt golang.org/x/image/tiff/lzw golang.org/x/image/tiff io/ioutil golang.org/x/image/riff golang.org/x/image/vp8 golang.org/x/image/vp8l golang.org/x/image/webp image/internal/imageutil image/jpeg image/png golang.org/x/sync/semaphore os/exec github.com/ollama/ollama/runner/common github.com/ollama/ollama/ml/nn github.com/ollama/ollama/ml/nn/fast image/draw golang.org/x/image/draw github.com/ollama/ollama/model/imageproc encoding/xml github.com/gin-contrib/sse github.com/gin-gonic/gin/internal/json golang.org/x/net/html github.com/gabriel-vasile/mimetype/internal/charset debug/dwarf internal/saferio debug/macho github.com/gabriel-vasile/mimetype/internal/json github.com/gabriel-vasile/mimetype/internal/magic github.com/gabriel-vasile/mimetype github.com/go-playground/locales github.com/go-playground/universal-translator github.com/leodido/go-urn golang.org/x/sys/cpu golang.org/x/crypto/sha3 golang.org/x/text/internal/tag golang.org/x/text/internal/language golang.org/x/text/internal/language/compact golang.org/x/text/language github.com/go-playground/validator/v10 github.com/pelletier/go-toml/v2/internal/danger github.com/pelletier/go-toml/v2/unstable github.com/pelletier/go-toml/v2/internal/tracker github.com/pelletier/go-toml/v2 encoding/gob go/token html text/template/parse text/template html/template net/rpc github.com/ugorji/go/codec hash/fnv google.golang.org/protobuf/internal/detrand google.golang.org/protobuf/internal/errors google.golang.org/protobuf/encoding/protowire google.golang.org/protobuf/internal/pragma google.golang.org/protobuf/reflect/protoreflect google.golang.org/protobuf/internal/encoding/messageset google.golang.org/protobuf/internal/genid google.golang.org/protobuf/internal/order google.golang.org/protobuf/internal/strs google.golang.org/protobuf/reflect/protoregistry google.golang.org/protobuf/runtime/protoiface google.golang.org/protobuf/proto gopkg.in/yaml.v3 github.com/gin-gonic/gin/binding github.com/gin-gonic/gin/render github.com/mattn/go-isatty golang.org/x/text/unicode/bidi golang.org/x/text/secure/bidirule golang.org/x/text/unicode/norm golang.org/x/net/idna 
golang.org/x/net/http/httpguts golang.org/x/net/http2/hpack golang.org/x/net/internal/httpcommon golang.org/x/net/http2 golang.org/x/net/http2/h2c net/http/httputil github.com/gin-gonic/gin github.com/gin-contrib/cors archive/tar archive/zip golang.org/x/text/encoding/unicode/utf32 github.com/nlpodyssey/gopickle/types github.com/nlpodyssey/gopickle/pickle github.com/nlpodyssey/gopickle/pytorch google.golang.org/protobuf/internal/descfmt google.golang.org/protobuf/internal/descopts google.golang.org/protobuf/internal/editiondefaults google.golang.org/protobuf/internal/encoding/text google.golang.org/protobuf/internal/encoding/defval google.golang.org/protobuf/internal/filedesc google.golang.org/protobuf/encoding/prototext google.golang.org/protobuf/internal/encoding/tag google.golang.org/protobuf/internal/impl google.golang.org/protobuf/internal/filetype google.golang.org/protobuf/internal/version google.golang.org/protobuf/runtime/protoimpl github.com/ollama/ollama/convert/sentencepiece github.com/apache/arrow/go/arrow/endian github.com/apache/arrow/go/arrow/internal/cpu github.com/apache/arrow/go/arrow/memory github.com/apache/arrow/go/arrow/bitutil github.com/apache/arrow/go/arrow/decimal128 github.com/apache/arrow/go/arrow/float16 golang.org/x/xerrors github.com/apache/arrow/go/arrow github.com/apache/arrow/go/arrow/array github.com/apache/arrow/go/arrow/tensor github.com/pkg/errors github.com/xtgo/set github.com/chewxy/hm github.com/google/flatbuffers/go github.com/pdevine/tensor/internal/storage github.com/pdevine/tensor/internal/execution github.com/ollama/ollama/ml/backend/ggml/ggml/src github.com/pdevine/tensor/internal/serialization/fb github.com/gogo/protobuf/proto google.golang.org/protobuf/types/descriptorpb google.golang.org/protobuf/internal/editionssupport google.golang.org/protobuf/types/gofeaturespb google.golang.org/protobuf/reflect/protodesc go4.org/unsafe/assume-no-moving-gc gonum.org/v1/gonum/blas/gonum github.com/gogo/protobuf/protoc-gen-gogo/descriptor github.com/golang/protobuf/proto github.com/gogo/protobuf/gogoproto gonum.org/v1/gonum/floats/scalar gonum.org/v1/gonum/floats github.com/pdevine/tensor/internal/serialization/pb github.com/x448/float16 golang.org/x/exp/rand gonum.org/v1/gonum/stat/combin github.com/ollama/ollama/fs/gguf github.com/ollama/ollama/harmony github.com/ollama/ollama/model/parsers github.com/ollama/ollama/model/renderers github.com/ollama/ollama/openai github.com/ollama/ollama/server/internal/internal/names github.com/ollama/ollama/server/internal/cache/blob gonum.org/v1/gonum/blas/blas64 gonum.org/v1/gonum/blas/cblas128 runtime/debug gonum.org/v1/gonum/lapack/gonum github.com/ollama/ollama/server/internal/internal/backoff github.com/ollama/ollama/server/internal/client/ollama github.com/ollama/ollama/template github.com/ollama/ollama/server/internal/registry github.com/ollama/ollama/thinking github.com/ollama/ollama/tools github.com/ollama/ollama/types/errtypes os/signal github.com/ollama/ollama/types/syncmap github.com/spf13/pflag github.com/spf13/cobra gonum.org/v1/gonum/lapack/lapack64 gonum.org/v1/gonum/mat gonum.org/v1/gonum/stat github.com/pdevine/tensor gonum.org/v1/gonum/stat/distuv github.com/pdevine/tensor/native github.com/ollama/ollama/convert github.com/ollama/ollama/ml/backend/ggml github.com/ollama/ollama/llama/llama.cpp/src github.com/ollama/ollama/ml/backend github.com/ollama/ollama/model github.com/ollama/ollama/model/models/deepseek2 github.com/ollama/ollama/model/models/gemma2 github.com/ollama/ollama/model/models/bert 
github.com/ollama/ollama/model/models/gemma3 github.com/ollama/ollama/model/models/gemma3n github.com/ollama/ollama/model/models/gptoss github.com/ollama/ollama/model/models/llama github.com/ollama/ollama/model/models/llama4 github.com/ollama/ollama/model/models/mistral3 github.com/ollama/ollama/model/models/mllama github.com/ollama/ollama/model/models/qwen2 github.com/ollama/ollama/model/models/qwen25vl github.com/ollama/ollama/model/models/qwen3 github.com/ollama/ollama/model/models github.com/ollama/ollama/llama github.com/ollama/ollama/sample github.com/ollama/ollama/llm github.com/ollama/ollama/runner/ollamarunner github.com/ollama/ollama/runner/llamarunner github.com/ollama/ollama/server github.com/ollama/ollama/runner github.com/ollama/ollama/cmd github.com/ollama/ollama + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + /usr/bin/cmake -S . 
-B redhat-linux-build_ggml-cpu -DCMAKE_C_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_Fortran_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_INSTALL_DO_STRIP:BOOL=OFF -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_FULL_SBINDIR:PATH=/usr/bin -DCMAKE_INSTALL_SBINDIR:PATH=bin -DINCLUDE_INSTALL_DIR:PATH=/usr/include -DLIB_INSTALL_DIR:PATH=/usr/lib64 -DSYSCONF_INSTALL_DIR:PATH=/etc -DSHARE_INSTALL_PREFIX:PATH=/usr/share -DLIB_SUFFIX=64 -DBUILD_SHARED_LIBS:BOOL=ON --preset CPU Preset CMake variables: CMAKE_BUILD_TYPE="Release" CMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded" -- The C compiler identification is GNU 15.2.1 -- The CXX compiler identification is GNU 15.2.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/gcc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/g++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- GGML_SYSTEM_ARCH: x86 -- Including CPU backend -- x86 detected -- Adding CPU backend variant ggml-cpu-x64: -- x86 detected -- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42 -- x86 detected -- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX -- x86 detected -- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2 -- x86 detected -- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512 -- x86 detected -- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI -- x86 detected -- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI -- Looking for a CUDA compiler -- Looking for a CUDA compiler - NOTFOUND -- Looking for a HIP compiler -- Looking for a HIP compiler - /usr/lib64/rocm/llvm/bin/clang++ -- Configuring done (7.8s) -- Generating done (0.0s) CMake Warning: Manually-specified variables were not used by the project: CMAKE_Fortran_FLAGS_RELEASE CMAKE_INSTALL_DO_STRIP INCLUDE_INSTALL_DIR LIB_SUFFIX SHARE_INSTALL_PREFIX SYSCONF_INSTALL_DIR -- Build files have been written to: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu + /usr/bin/cmake --build redhat-linux-build_ggml-cpu -j4 --verbose --target ggml-cpu Change Dir: '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile -j4 ggml-cpu /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 
-B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/gmake -f CMakeFiles/Makefile2 ggml-cpu gmake[1]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/CMakeFiles 99 /usr/bin/gmake -f CMakeFiles/Makefile2 ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/all gmake[2]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/DependInfo.cmake "--color=" /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/DependInfo.cmake "--color=" cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 1%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 1%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 3%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF CMakeFiles/ggml-base.dir/ggml.c.o.d -o CMakeFiles/ggml-base.dir/ggml.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5663:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function] 5663 | static void ggml_hash_map_free(struct hash_map * map) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5656:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function] 5656 | static struct hash_map * ggml_new_hash_map(size_t size) { | ^~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 3%] Built target ggml-cpu-x64-feats [ 3%] Built target ggml-cpu-alderlake-feats gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o -MF CMakeFiles/ggml-base.dir/ggml.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp [ 4%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c [ 4%] Built target ggml-cpu-sse42-feats /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp:1: 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' 
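Editorial note: the -Wunused-function warnings repeated above (and throughout the rest of this compile) all point at static helper functions defined in ggml-impl.h. They are diagnostic noise rather than build errors: the header defines plain static (internal-linkage) helpers, so every C/C++ translation unit that includes it but never calls a particular helper carries an unused private copy, and GCC reports each one under -Wall. The fragment below is a minimal illustrative sketch of that mechanism, not ggml source; the file name and helper names are made up, and it also shows the two usual ways such header-defined helpers are kept quiet (static inline, or GCC's __attribute__((unused))).

/* unused_demo.c -- hypothetical sketch, not part of the ollama/ggml sources.
 * Mimics a shared header that defines plain `static` helpers, the pattern
 * behind the "defined but not used" warnings in this log.                  */
#include <stddef.h>
#include <stdio.h>

/* Plain `static`: internal linkage, so a translation unit that includes the
 * definition but never calls it triggers -Wunused-function under -Wall.    */
static size_t bitset_words(size_t n_bits) {
    return (n_bits + 31) / 32;
}

/* `static inline` definitions are exempt from -Wunused-function in GCC.    */
static inline size_t bitset_words_inline(size_t n_bits) {
    return (n_bits + 31) / 32;
}

/* GCC/Clang also accept an explicit "unused" attribute on the definition.  */
static __attribute__((unused)) size_t bitset_words_tagged(size_t n_bits) {
    return (n_bits + 31) / 32;
}

int main(void) {
    /* None of the helpers is called on purpose: the interesting output is
     * the compiler diagnostic, not anything printed at run time.           */
    puts("compile with: gcc -Wall -c unused_demo.c");
    return 0;
}

Compiling this sketch with gcc -Wall should produce exactly one "defined but not used" warning, for bitset_words(); the warnings in this log arise the same way and, as the rest of the log shows, do not stop the build.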
cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 5%] Built target ggml-cpu-sandybridge-feats [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 6%] Built target ggml-cpu-haswell-feats [ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp:14: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 7%] Built target ggml-cpu-skylakex-feats [ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o cd 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-threading.cpp /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 9%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ 
defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 9%] Built target ggml-cpu-icelake-feats [ 9%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c [ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o CMakeFiles/ggml-base.dir/gguf.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp:3: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const 
struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:4067:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function] 4067 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid, | ^~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function] 579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min, | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: 
‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-base.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-base.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 
-specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -Wl,-soname,libggml-base.so -o ../../../../../lib/ollama/libggml-base.so "CMakeFiles/ggml-base.dir/ggml.c.o" "CMakeFiles/ggml-base.dir/ggml.cpp.o" "CMakeFiles/ggml-base.dir/ggml-alloc.c.o" "CMakeFiles/ggml-base.dir/ggml-backend.cpp.o" "CMakeFiles/ggml-base.dir/ggml-opt.cpp.o" "CMakeFiles/ggml-base.dir/ggml-threading.cpp.o" "CMakeFiles/ggml-base.dir/ggml-quants.c.o" "CMakeFiles/ggml-base.dir/gguf.cpp.o" -lm gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 11%] Built target ggml-base /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/DependInfo.cmake "--color=" cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory 
'/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 12%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o [ 13%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c [ 15%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o [ 15%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t 
params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 
261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 19%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
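(Note on the diagnostics repeated above: ggml-impl.h defines a number of small static helper functions directly in the header, so every translation unit that includes it without calling them (here ggml-cpu.c and ggml-cpu.cpp, compiled once per CPU variant) triggers GCC's -Wunused-function under -Wall. The warnings are cosmetic and do not stop the build. Below is a minimal sketch of how the warning arises and how such header-defined helpers are usually silenced; the file and function names are illustrative only, not taken from the ollama sources.)

/* unused_demo.c -- compile with: gcc -Wall -c unused_demo.c
 * GCC reports
 *   warning: 'helper_plain' defined but not used [-Wunused-function]
 * for the plain static definition only. */
#include <stddef.h>

/* Plain static definition: flagged whenever the including translation
 * unit never calls it. */
static size_t helper_plain(size_t n) {
    return (n + 7) / 8;
}

/* 'static inline' (or __attribute__((unused))) exempts the helper from
 * -Wunused-function, which is why header-defined helpers are commonly
 * written this way. */
static inline size_t helper_inline(size_t n) {
    return (n + 7) / 8;
}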
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, 
uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: 
warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void 
ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, 
const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, 
size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 23%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 24%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 24%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 25%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 27%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: 
warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 32%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 35%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 36%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 36%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 39%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 39%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  187 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  150 | static void
ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | 
static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 41%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 42%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float 
ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ 
defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used 
[-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined 
but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o [ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 46%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * 
tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 47%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not 
used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used 
[-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 50%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 51%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c [ 52%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but 
not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 53%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 54%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: 
‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 55%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 56%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-x64.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-x64.dir/link.txt --verbose=1 In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: 
‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 57%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sse42.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-sse42.dir/link.txt --verbose=1 [ 58%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used 
[-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 59%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 60%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp [ 61%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 62%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sandybridge.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-sandybridge.dir/link.txt --verbose=1 In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 62%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-alderlake.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-alderlake.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-x64.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-x64.so "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 62%] Built target ggml-cpu-x64 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 
-Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-sse42.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-sse42.so "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 62%] Built target ggml-cpu-sse42 /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 63%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 
-DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: 
‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 64%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 65%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 66%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 66%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static 
float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 67%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-sandybridge.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-sandybridge.so "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 67%] Built target ggml-cpu-sandybridge [ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 69%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 70%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 71%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | 
static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 72%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 
| static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-alderlake.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-alderlake.so "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 72%] Built target ggml-cpu-alderlake [ 73%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 73%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 74%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 75%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 77%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 78%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 80%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 
135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * 
tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 81%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 83%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 84%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void 
ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 85%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 85%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 86%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 87%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void 
ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 87%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 88%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * 
tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 89%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used 
[-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | 
static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float 
ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 92%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: 
warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 92%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, 
int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 94%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: 
warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 95%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 95%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-haswell.dir/link.txt --verbose=1 [ 96%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 97%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 98%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct 
ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-skylakex.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-skylakex.dir/link.txt --verbose=1 [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-icelake.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-icelake.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 
-mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-haswell.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-haswell.so "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-haswell /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-skylakex.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-skylakex.so "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o" 
"CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-skylakex /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-icelake.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-icelake.so "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-icelake /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory 
'/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Nothing to be done for 'ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build'. gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu gmake[2]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/CMakeFiles 0 gmake[1]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + 
export CC + CXX=g++ + export CXX + /usr/bin/cmake -S . -B redhat-linux-build_ggml-rocm-6 -DCMAKE_C_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_Fortran_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_INSTALL_DO_STRIP:BOOL=OFF -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_FULL_SBINDIR:PATH=/usr/bin -DCMAKE_INSTALL_SBINDIR:PATH=bin -DINCLUDE_INSTALL_DIR:PATH=/usr/include -DLIB_INSTALL_DIR:PATH=/usr/lib64 -DSYSCONF_INSTALL_DIR:PATH=/etc -DSHARE_INSTALL_PREFIX:PATH=/usr/share -DLIB_SUFFIX=64 -DBUILD_SHARED_LIBS:BOOL=ON --preset 'ROCm 6' Preset CMake variables: AMDGPU_TARGETS="gfx900;gfx940;gfx941;gfx942;gfx1010;gfx1012;gfx1030;gfx1100;gfx1101;gfx1102;gfx1151;gfx1200;gfx1201;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-" CMAKE_BUILD_TYPE="Release" CMAKE_HIP_FLAGS="-parallel-jobs=4" CMAKE_HIP_PLATFORM="amd" CMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded" -- The C compiler identification is GNU 15.2.1 -- The CXX compiler identification is GNU 15.2.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/gcc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/g++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- GGML_SYSTEM_ARCH: x86 -- Including CPU backend -- x86 detected -- Adding CPU backend variant ggml-cpu-x64: -- x86 detected -- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42 -- x86 detected -- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX -- x86 detected -- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2 -- x86 detected -- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512 -- x86 detected -- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI -- x86 detected -- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI -- Looking for a CUDA compiler -- Looking for a CUDA compiler - NOTFOUND -- Looking for a HIP compiler -- Looking for a HIP compiler - /usr/lib64/rocm/llvm/bin/clang++ CMake Warning (dev) at /usr/lib64/cmake/hip/hip-config-amd.cmake:70 (message): AMDGPU_TARGETS is deprecated. Please use GPU_TARGETS instead. Call Stack (most recent call first): /usr/lib64/cmake/hip/hip-config.cmake:159 (include) CMakeLists.txt:97 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. 
-- The HIP compiler identification is Clang 19.0.0 -- Detecting HIP compiler ABI info -- Detecting HIP compiler ABI info - done -- Check for working HIP compiler: /usr/lib64/rocm/llvm/bin/clang++ - skipped -- Detecting HIP compile features -- Detecting HIP compile features - done -- HIP and hipBLAS found -- Configuring done (3.6s) -- Generating done (0.0s) CMake Warning: Manually-specified variables were not used by the project: CMAKE_Fortran_FLAGS_RELEASE CMAKE_INSTALL_DO_STRIP INCLUDE_INSTALL_DIR LIB_SUFFIX SHARE_INSTALL_PREFIX SYSCONF_INSTALL_DIR -- Build files have been written to: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 + /usr/bin/cmake --build redhat-linux-build_ggml-rocm-6 -j4 --verbose --target ggml-hip Change Dir: '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile -j4 ggml-hip /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/gmake -f CMakeFiles/Makefile2 ggml-hip gmake[1]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/CMakeFiles 47 /usr/bin/gmake -f CMakeFiles/Makefile2 ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/all gmake[2]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [ 2%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o [ 4%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o cd 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o -MF CMakeFiles/ggml-base.dir/ggml.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF CMakeFiles/ggml-base.dir/ggml.c.o.d -o CMakeFiles/ggml-base.dir/ggml.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5663:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function] 5663 | static void ggml_hash_map_free(struct hash_map * map) { | ^~~~~~~~~~~~~~~~~~ 
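The block of -Wunused-function warnings above, and the near-identical blocks that follow for ggml.cpp, ggml-backend.cpp, ggml-opt.cpp, ggml-quants.c and gguf.cpp, all have the same cause: ggml-impl.h defines small helpers with internal linkage (static, not inline), so every translation unit that includes the header but never calls a given helper reports its own unused copy. A minimal reproduction under -Wall, with hypothetical names rather than the ggml sources:

    // unused_helper.cpp -- hypothetical example, not the ggml sources.
    // Compile: g++ -Wall -c unused_helper.cpp
    // Expected diagnostic, matching the pattern in the log:
    //   warning: 'int unused_helper(int)' defined but not used [-Wunused-function]

    static int unused_helper(int x) {  // internal linkage, never referenced below
        return x * 2;
    }

    int answer() {  // something this translation unit actually defines and exports
        return 42;
    }

Declaring such helpers inline (or tagging them [[maybe_unused]]) would silence the diagnostic; as emitted here the warnings are noisy but harmless, and they do not fail the build because the hardened flags only promote format-security warnings to errors.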
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5656:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function] 5656 | static struct hash_map * ggml_new_hash_map(size_t size) { | ^~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * 
tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp:14: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED 
-DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-threading.cpp [ 6%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:4067:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function] 4067 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid, | ^~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function] 579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min, | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o CMakeFiles/ggml-base.dir/gguf.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp:3: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used 
[-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 8%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-base.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-base.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared 
-Wl,-soname,libggml-base.so -o ../../../../../lib/ollama/libggml-base.so "CMakeFiles/ggml-base.dir/ggml.c.o" "CMakeFiles/ggml-base.dir/ggml.cpp.o" "CMakeFiles/ggml-base.dir/ggml-alloc.c.o" "CMakeFiles/ggml-base.dir/ggml-backend.cpp.o" "CMakeFiles/ggml-base.dir/ggml-opt.cpp.o" "CMakeFiles/ggml-base.dir/ggml-threading.cpp.o" "CMakeFiles/ggml-base.dir/ggml-quants.c.o" "CMakeFiles/ggml-base.dir/gguf.cpp.o" -lm gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [ 8%] Built target ggml-base /usr/bin/gmake -f ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build.make ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/DependInfo.cmake "--color=" Dependee "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/DependInfo.cmake" is newer than depender "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend.internal". Dependee "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/CMakeDirectoryInformation.cmake" is newer than depender "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend.internal". 
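At this point libggml-base.so has been linked into lib/ollama/, and every compile line in this configuration carries -DGGML_BACKEND_DL and -DGGML_BACKEND_SHARED: the HIP backend built next is a shared library that the runtime discovers and loads dynamically instead of linking in. A rough sketch of that plugin-loading idea using plain dlopen; the install path and the entry-point symbol below are illustrative assumptions, not ollama's actual loader:

    // load_backend.cpp -- illustrative sketch only; path and symbol are assumptions.
    // Build: g++ -Wall load_backend.cpp -ldl -o load_backend
    #include <cstdio>
    #include <dlfcn.h>

    int main() {
        // Hypothetical install location for the ROCm backend plugin.
        void *handle = dlopen("/usr/lib64/ollama/libggml-hip.so", RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            std::fprintf(stderr, "ROCm backend not usable on this host: %s\n", dlerror());
            return 1;  // a real loader would fall back to another backend (e.g. CPU)
        }
        // A dynamically loaded backend would expose some registration entry point;
        // the symbol name used here is a placeholder.
        void *entry = dlsym(handle, "ggml_backend_init");
        std::printf("backend loaded, entry point %s\n", entry ? "found" : "missing");
        dlclose(handle);
        return 0;
    }

Shipping the GPU backends as optional plugins is what lets the same binary run on machines without ROCm installed: if loading the plugin fails, only that backend is skipped.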
Scanning dependencies of target ggml-hip gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/gmake -f ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build.make ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/acc.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/arange.cu [ 12%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o [ 12%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cu [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/concat.cu [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cu [ 17%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cu [ 17%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cu [ 19%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/convert.cu [ 19%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cu [ 21%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu [ 21%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile-f16.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile-f32.cu [ 25%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu [ 25%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cu [ 27%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cu [ 27%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx1030. 1 warning generated when compiling for gfx1100. 1 warning generated when compiling for gfx1010. 1 warning generated when compiling for gfx1012. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length 
arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx1101. 1 warning generated when compiling for gfx1102. 1 warning generated when compiling for gfx1200. 1 warning generated when compiling for gfx1151. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx906. 1 warning generated when compiling for gfx1201. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx900. 1 warning generated when compiling for gfx908. 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx90a. 1 warning generated when compiling for gfx90a. [ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/gla.cu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ [ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cu [ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mean.cu 1 warning generated when compiling for gfx940. 1 warning generated when compiling for gfx941. /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx942. 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for host. [ 31%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cu [ 31%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
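[editor's note] The -Wvla-cxx-extension diagnostics repeated above (once per --offload-arch pass plus the host pass, which is why the same warning is reported for gfx900 through gfx1201 and then "for host") all point at the same two lines of ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu: a stack buffer whose size comes from strlen(devName) at run time. Variable-length arrays are a Clang extension rather than standard C++, so every device and host compilation of the file reports it once; the build itself is not affected. As a purely illustrative sketch (not the upstream code; the devName parameter and the copy-only usage are assumptions), the same buffer could be expressed without the extension like this:

    #include <cstring>
    #include <string>

    // Pattern flagged at ggml-cuda.cu:146-147 (paraphrased from the warning):
    //     int  archLen = strlen(devName);
    //     char archName[archLen + 1];   // VLA: size only known at run time
    //
    // Standard-conforming alternative: let std::string own the run-time-sized
    // buffer instead of declaring a variable-length stack array.
    static std::string copy_arch_name(const char *devName) {
        return std::string(devName, std::strlen(devName));
    }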
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu [ 34%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cu [ 34%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cu [ 36%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/norm.cu [ 36%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/pad.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cu [ 40%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cu [ 40%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/roll.cu [ 42%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/rope.cu [ 42%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/scale.cu [ 44%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cu [ 44%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cu [ 48%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/sum.cu [ 48%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cu [ 51%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cu [ 51%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/unary.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu [ 55%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu [ 55%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu [ 57%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu [ 57%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu [ 59%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu [ 59%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu [ 63%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu [ 63%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu [ 65%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu [ 65%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu [ 70%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu [ 70%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu [ 72%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq1_s.cu [ 72%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_s.cu [ 74%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu [ 74%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_s.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu [ 78%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu [ 78%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-mxfp4.cu [ 80%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q2_k.cu [ 80%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q3_k.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_0.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_1.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_k.cu [ 85%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_0.cu [ 85%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_1.cu [ 87%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_k.cu [ 87%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q6_k.cu [ 89%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q8_0.cu [ 89%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu [ 93%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu [ 93%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu [ 95%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu [ 95%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu [ 97%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu [ 97%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu [100%] Linking HIP shared module ../../../../../../lib/ollama/libggml-hip.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-hip.dir/link.txt --verbose=1 /usr/lib64/rocm/llvm/bin/clang++ -fPIC -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- --hip-link --rtlib=compiler-rt -unwindlib=libgcc -Xlinker --dependency-file=CMakeFiles/ggml-hip.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../../lib/ollama/libggml-hip.so "CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o" 
"CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o" 
"CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/lib/ollama: ../../../../../../liclang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-hardened-ld' [-Wunused-command-line-argument] clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors' [-Wunused-command-line-argument] clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1' [-Wunused-command-line-argument] clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-package-notes' [-Wunused-command-line-argument] b/ollama/libggml-base.so /usr/lib64/libhipblas.so.2.4 /usr/lib64/librocblas.so.4.4 /usr/lib64/libamdhip64.so.6.4.43484 /usr/lib64/libamdhip64.so.6.4.43484 gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [100%] Built target ggml-hip gmake[2]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/CMakeFiles 0 gmake[1]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' + RPM_EC=0 ++ jobs -p + exit 0 Executing(%install): 
/bin/sh -e /var/tmp/rpm-tmp.kF80jk + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + '[' /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT '!=' / ']' + rm -rf /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT ++ dirname /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + mkdir -p /builddir/build/BUILD/ollama-0.12.3-build + mkdir /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml install --destdir /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT --install-directory /usr/share/licenses/ollama --filelist licenses.list Using detector: askalono + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin' + install -m 0755 -vp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ollama' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig' + install -m 0644 -vp /builddir/build/SOURCES/sysconfig-ollama /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig/ollama '/builddir/build/SOURCES/sysconfig-ollama' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig/ollama' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system' + install -m 0644 -vp /builddir/build/SOURCES/ollama.service /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system/ollama.service '/builddir/build/SOURCES/ollama.service' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system/ollama.service' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d' + install -m 0644 -vp /builddir/build/SOURCES/ollama-user.conf /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d/ollama.conf '/builddir/build/SOURCES/ollama-user.conf' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d/ollama.conf' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib/ollama install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib/ollama' + DESTDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + /usr/bin/cmake --install redhat-linux-build_ggml-cpu --component CPU -- Install configuration: "Release" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-base.so -- Installing: 
/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so" to "" + DESTDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + /usr/bin/cmake --install redhat-linux-build_ggml-rocm-6 --component HIP -- Install configuration: "Release" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-hip.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-hip.so" to "" + /usr/bin/find-debuginfo -j4 --strict-build-id -m -i --build-id-seed 0.12.3-1.fc43 --unique-debug-suffix -0.12.3-1.fc43.x86_64 --unique-debug-src-base ollama-0.12.3-1.fc43.x86_64 --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 -S debugsourcefiles.list /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 find-debuginfo: starting Extracting debug info from 10 files warning: Unsupported auto-load script at offset 0 in section .debug_gdb_scripts of file /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ollama. Use `info auto-load python-scripts [REGEXP]' to list them. DWARF-compressing 10 files dwz: ./usr/lib64/ollama/libggml-hip.so-0.12.3-1.fc43.x86_64.debug: Unknown debugging section .debug_str_offsets dwz: ./usr/lib64/ollama/libggml-hip.so-0.12.3-1.fc43.x86_64.debug: Unknown debugging section .debug_str_offsets sepdebugcrcfix: Updated 9 CRC32s, 1 CRC32s did match. 
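A note on the two dwz "Unknown debugging section .debug_str_offsets" messages above: .debug_str_offsets is a DWARF 5 section produced for libggml-hip.so by the ROCm clang toolchain, and dwz does not recognize it, so it appears to skip DWARF compression for that file while find-debuginfo carries on (the sepdebugcrcfix line suggests the remaining debuginfo files were processed normally). A quick check for the section, assuming binutils' readelf is available in the buildroot:

  readelf -S ./usr/lib64/ollama/libggml-hip.so-0.12.3-1.fc43.x86_64.debug | grep debug_str_offsets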
Creating .debug symlinks for symlinks to ELF files Copying sources found by 'debugedit -l' to /usr/src/debug/ollama-0.12.3-1.fc43.x86_64 find-debuginfo: done + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-ldconfig + /usr/lib/rpm/brp-compress + /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip + /usr/lib/rpm/check-rpaths + /usr/lib/rpm/redhat/brp-mangle-shebangs + /usr/lib/rpm/brp-remove-la-files + /usr/lib/rpm/redhat/brp-python-rpm-in-distinfo + env /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 -j4 + /usr/lib/rpm/redhat/brp-python-hardlink + /usr/bin/add-determinism --brp -j4 /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT Scanned 497 directories and 1628 files, processed 0 inodes, 0 modified (0 replaced + 0 rewritten), 0 unsupported format, 0 errors Reading /builddir/build/BUILD/ollama-0.12.3-build/SPECPARTS/rpm-debuginfo.specpart Executing(%check): /bin/sh -e /var/tmp/rpm-tmp.NroUVn + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml report all --verify 'Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSL-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC0-1.0 AND ISC AND LicenseRef-Fedora-Public-Domain AND LicenseRef-scancode-protobuf AND MIT AND NCSA AND NTP AND OpenSSL AND ZPL-2.1 AND Zlib' Using detector: askalono LICENSE: MIT convert/sentencepiece/LICENSE: Apache-2.0 llama/llama.cpp/LICENSE: MIT ml/backend/ggml/ggml/LICENSE: MIT vendor/github.com/agnivade/levenshtein/License.txt: MIT vendor/github.com/apache/arrow/go/arrow/LICENSE.txt: (Apache-2.0 AND BSD-3-Clause) AND BSD-3-Clause AND CC0-1.0 AND (LicenseRef-scancode-public-domain AND MIT) AND Apache-2.0 AND BSL-1.0 AND (BSD-2-Clause AND BSD-3-Clause) AND MIT AND (BSL-1.0 AND BSD-2-Clause) AND BSD-2-Clause AND ZPL-2.1 AND LicenseRef-scancode-protobuf AND NCSA AND (CC-BY-3.0 AND MIT) AND (CC-BY-4.0 AND LicenseRef-scancode-public-domain) AND NTP AND Zlib AND OpenSSL AND (BSD-3-Clause AND BSD-2-Clause) AND (BSD-2-Clause AND Zlib) vendor/github.com/bytedance/sonic/LICENSE: Apache-2.0 vendor/github.com/bytedance/sonic/loader/LICENSE: Apache-2.0 vendor/github.com/chewxy/hm/LICENCE: MIT vendor/github.com/chewxy/math32/LICENSE: BSD-2-Clause vendor/github.com/cloudwego/base64x/LICENSE: Apache-2.0 vendor/github.com/cloudwego/base64x/LICENSE-APACHE: Apache-2.0 vendor/github.com/cloudwego/iasm/LICENSE-APACHE: Apache-2.0 vendor/github.com/containerd/console/LICENSE: Apache-2.0 vendor/github.com/d4l3k/go-bfloat16/LICENSE: MIT vendor/github.com/davecgh/go-spew/LICENSE: ISC vendor/github.com/dlclark/regexp2/LICENSE: MIT vendor/github.com/emirpasic/gods/v2/LICENSE: BSD-2-Clause AND ISC vendor/github.com/gabriel-vasile/mimetype/LICENSE: MIT vendor/github.com/gin-contrib/cors/LICENSE: MIT vendor/github.com/gin-contrib/sse/LICENSE: MIT vendor/github.com/gin-gonic/gin/LICENSE: MIT vendor/github.com/go-playground/locales/LICENSE: MIT vendor/github.com/go-playground/universal-translator/LICENSE: MIT vendor/github.com/go-playground/validator/v10/LICENSE: MIT vendor/github.com/goccy/go-json/LICENSE: MIT vendor/github.com/gogo/protobuf/LICENSE: BSD-3-Clause vendor/github.com/golang/protobuf/LICENSE: BSD-3-Clause vendor/github.com/google/flatbuffers/LICENSE: Apache-2.0 vendor/github.com/google/go-cmp/LICENSE: BSD-3-Clause vendor/github.com/google/uuid/LICENSE: BSD-3-Clause vendor/github.com/inconshreveable/mousetrap/LICENSE: Apache-2.0 vendor/github.com/json-iterator/go/LICENSE: MIT vendor/github.com/klauspost/cpuid/v2/LICENSE: MIT 
vendor/github.com/leodido/go-urn/LICENSE: MIT vendor/github.com/mattn/go-isatty/LICENSE: MIT vendor/github.com/mattn/go-runewidth/LICENSE: MIT vendor/github.com/modern-go/concurrent/LICENSE: Apache-2.0 vendor/github.com/modern-go/reflect2/LICENSE: Apache-2.0 vendor/github.com/nlpodyssey/gopickle/LICENSE: BSD-2-Clause vendor/github.com/olekukonko/tablewriter/LICENSE.md: MIT vendor/github.com/pdevine/tensor/LICENCE: Apache-2.0 vendor/github.com/pelletier/go-toml/v2/LICENSE: MIT vendor/github.com/pkg/errors/LICENSE: BSD-2-Clause vendor/github.com/pmezard/go-difflib/LICENSE: BSD-3-Clause vendor/github.com/rivo/uniseg/LICENSE.txt: MIT vendor/github.com/spf13/cobra/LICENSE.txt: Apache-2.0 vendor/github.com/spf13/pflag/LICENSE: BSD-3-Clause vendor/github.com/stretchr/testify/LICENSE: MIT vendor/github.com/twitchyliquid64/golang-asm/LICENSE: BSD-3-Clause vendor/github.com/ugorji/go/codec/LICENSE: MIT vendor/github.com/x448/float16/LICENSE: MIT vendor/github.com/xtgo/set/LICENSE: BSD-2-Clause vendor/go4.org/unsafe/assume-no-moving-gc/LICENSE: BSD-3-Clause vendor/golang.org/x/arch/LICENSE: BSD-3-Clause vendor/golang.org/x/crypto/LICENSE: BSD-3-Clause vendor/golang.org/x/exp/LICENSE: BSD-3-Clause vendor/golang.org/x/image/LICENSE: BSD-3-Clause vendor/golang.org/x/net/LICENSE: BSD-3-Clause vendor/golang.org/x/sync/LICENSE: BSD-3-Clause vendor/golang.org/x/sys/LICENSE: BSD-3-Clause vendor/golang.org/x/term/LICENSE: BSD-3-Clause vendor/golang.org/x/text/LICENSE: BSD-3-Clause vendor/golang.org/x/tools/LICENSE: BSD-3-Clause vendor/golang.org/x/xerrors/LICENSE: BSD-3-Clause vendor/gonum.org/v1/gonum/LICENSE: BSD-3-Clause vendor/google.golang.org/protobuf/LICENSE: BSD-3-Clause vendor/gopkg.in/yaml.v3/LICENSE: MIT AND (MIT AND Apache-2.0) vendor/gorgonia.org/vecf32/LICENSE: MIT vendor/gorgonia.org/vecf64/LICENSE: MIT Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSL-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC0-1.0 AND ISC AND LicenseRef-Fedora-Public-Domain AND LicenseRef-scancode-protobuf AND MIT AND NCSA AND NTP AND OpenSSL AND ZPL-2.1 AND Zlib + GO_LDFLAGS=' -X github.com/ollama/ollama/version=0.12.3' + GO_TEST_FLAGS='-buildmode pie -compiler gc' + GO_TEST_EXT_LD_FLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + go-rpm-integration check -i github.com/ollama/ollama -b /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin -s /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build -V 0.12.3-1.fc43 -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT -g /usr/share/gocode -r '.*example.*' Testing in: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src PATH: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin:/usr/bin:/bin:/usr/sbin:/sbin GOPATH: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode GO111MODULE: off command: go test -buildmode pie -compiler gc -ldflags " -X github.com/ollama/ollama/version=0.12.3 -extldflags '-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '" testing: github.com/ollama/ollama github.com/ollama/ollama/api 2025/10/04 05:48:37 http: 
superfluous response.WriteHeader call from github.com/ollama/ollama/api.TestClientStream.func1.1 (client_test.go:128) PASS ok github.com/ollama/ollama/api 0.011s github.com/ollama/ollama/api 2025/10/04 05:48:37 http: superfluous response.WriteHeader call from github.com/ollama/ollama/api.TestClientStream.func1.1 (client_test.go:128) PASS ok github.com/ollama/ollama/api 0.010s github.com/ollama/ollama/app/assets ? github.com/ollama/ollama/app/assets [no test files] github.com/ollama/ollama/app/lifecycle PASS ok github.com/ollama/ollama/app/lifecycle 0.003s github.com/ollama/ollama/app/lifecycle PASS ok github.com/ollama/ollama/app/lifecycle 0.003s github.com/ollama/ollama/app/store ? github.com/ollama/ollama/app/store [no test files] github.com/ollama/ollama/app/tray ? github.com/ollama/ollama/app/tray [no test files] github.com/ollama/ollama/app/tray/commontray ? github.com/ollama/ollama/app/tray/commontray [no test files] github.com/ollama/ollama/auth ? github.com/ollama/ollama/auth [no test files] github.com/ollama/ollama/cmd deleted 'test-model' Couldn't find '/tmp/TestPushHandlersuccessful_push1460693564/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJbmLhtLeY0MDOX05Hl+4gJhBgFuDIb2crLHqXxLsvMU Couldn't find '/tmp/TestPushHandlernot_signed_in_push4176593819/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIML/TgrjsMHn8MqgsbM7AgQYipVD9CUgplHKKCSJtbwn Couldn't find '/tmp/TestPushHandlerunauthorized_push3000847613/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO+7HQwT74Fn5Cn++4UzaDMddvBepU+b931636YEU5dQ Added image '/tmp/TestExtractFileDataRemovesQuotedFilepath141910831/001/img.jpg' PASS ok github.com/ollama/ollama/cmd 0.025s github.com/ollama/ollama/cmd deleted 'test-model' Couldn't find '/tmp/TestPushHandlersuccessful_push1110002098/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICprhz/ucanK/0xeuBeXKMm9vMwrEOz+S/EfIALE3XAs Couldn't find '/tmp/TestPushHandlernot_signed_in_push1376565828/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZfYK7V5N/bhC9eykPVKOGX2tvLatOhKHNMWvI/JBPk Couldn't find '/tmp/TestPushHandlerunauthorized_push4035183306/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILxbjk4uCcR5FeNoEGzsqsEXnEAHQOKO7c1Vbfsuex43 Added image '/tmp/TestExtractFileDataRemovesQuotedFilepath195653844/001/img.jpg' PASS ok github.com/ollama/ollama/cmd 0.025s github.com/ollama/ollama/convert PASS ok github.com/ollama/ollama/convert 0.014s github.com/ollama/ollama/convert PASS ok github.com/ollama/ollama/convert 0.014s github.com/ollama/ollama/convert/sentencepiece ?
github.com/ollama/ollama/convert/sentencepiece [no test files] github.com/ollama/ollama/discover 2025/10/04 05:51:03 INFO example scenario="#5554 Docker Ollama container inside the LXC" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:51:03 INFO example scenario="#5554 LXC direct output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:51:03 INFO example scenario="#5554 LXC docker container output" cpus="[{ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:29 EfficiencyCoreCount:0 ThreadCount:29}]" 2025/10/04 05:51:03 INFO example scenario="#5554 LXC docker output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:51:03 INFO example scenario="#7359 VMware multi-core core VM" cpus="[{ID:0 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:10 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:12 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:14 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:2 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:4 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:6 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:8 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1}]" 2025/10/04 05:51:03 INFO example scenario="#7287 HyperV 2 socket exposed to VM" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:51:03 INFO looking for compatible GPUs 2025/10/04 05:51:03 INFO no compatible GPUs were discovered PASS ok github.com/ollama/ollama/discover 0.007s github.com/ollama/ollama/discover 2025/10/04 05:51:03 INFO example scenario="#5554 Docker Ollama container inside the LXC" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:51:03 INFO example scenario="#5554 LXC direct output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:51:03 INFO example scenario="#5554 LXC docker container output" cpus="[{ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:29 EfficiencyCoreCount:0 ThreadCount:29}]" 2025/10/04 05:51:03 INFO example scenario="#5554 LXC docker output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 
128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:51:03 INFO example scenario="#7359 VMware multi-core core VM" cpus="[{ID:0 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:10 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:12 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:14 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:2 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:4 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:6 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:8 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1}]" 2025/10/04 05:51:03 INFO example scenario="#7287 HyperV 2 socket exposed to VM" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:51:03 INFO looking for compatible GPUs 2025/10/04 05:51:03 INFO no compatible GPUs were discovered PASS ok github.com/ollama/ollama/discover 0.007s github.com/ollama/ollama/envconfig 2025/10/04 05:51:04 WARN invalid port, using default port=-1 default=11434 2025/10/04 05:51:04 WARN invalid port, using default port=66000 default=11434 2025/10/04 05:51:04 WARN invalid environment variable, using default key=OLLAMA_UINT value=-1 default=11434 2025/10/04 05:51:04 WARN invalid environment variable, using default key=OLLAMA_UINT value=0o10 default=11434 2025/10/04 05:51:04 WARN invalid environment variable, using default key=OLLAMA_UINT value=0x10 default=11434 2025/10/04 05:51:04 WARN invalid environment variable, using default key=OLLAMA_UINT value=string default=11434 PASS ok github.com/ollama/ollama/envconfig 0.005s github.com/ollama/ollama/envconfig 2025/10/04 05:51:04 WARN invalid port, using default port=-1 default=11434 2025/10/04 05:51:04 WARN invalid port, using default port=66000 default=11434 2025/10/04 05:51:04 WARN invalid environment variable, using default key=OLLAMA_UINT value=-1 default=11434 2025/10/04 05:51:04 WARN invalid environment variable, using default key=OLLAMA_UINT value=0o10 default=11434 2025/10/04 05:51:04 WARN invalid environment variable, using default key=OLLAMA_UINT value=0x10 default=11434 2025/10/04 05:51:04 WARN invalid environment variable, using default key=OLLAMA_UINT value=string default=11434 PASS ok github.com/ollama/ollama/envconfig 0.005s github.com/ollama/ollama/format PASS ok github.com/ollama/ollama/format 0.003s github.com/ollama/ollama/format PASS ok github.com/ollama/ollama/format 0.002s github.com/ollama/ollama/fs ? 
github.com/ollama/ollama/fs [no test files] github.com/ollama/ollama/fs/ggml PASS ok github.com/ollama/ollama/fs/ggml 0.004s github.com/ollama/ollama/fs/ggml PASS ok github.com/ollama/ollama/fs/ggml 0.005s github.com/ollama/ollama/fs/gguf PASS ok github.com/ollama/ollama/fs/gguf 0.004s github.com/ollama/ollama/fs/gguf PASS ok github.com/ollama/ollama/fs/gguf 0.004s github.com/ollama/ollama/fs/util/bufioutil PASS ok github.com/ollama/ollama/fs/util/bufioutil 0.002s github.com/ollama/ollama/fs/util/bufioutil PASS ok github.com/ollama/ollama/fs/util/bufioutil 0.002s github.com/ollama/ollama/harmony event: {} event: {Header:{Role:user Channel: Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_weather}} event: {Content:{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|message|>{"sunny": true, "temperature": 20}} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:User asks weather in SF. We need location. Use get_current_weather with location "San Francisco, CA".} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_current_weather}} event: {Content:{"location":"San Francisco, CA"}<|call|>} PASS ok github.com/ollama/ollama/harmony 0.003s github.com/ollama/ollama/harmony event: {} event: {Header:{Role:user Channel: Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_weather}} event: {Content:{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|message|>{"sunny": true, "temperature": 20}} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:User asks weather in SF. We need location. Use get_current_weather with location "San Francisco, CA".} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_current_weather}} event: {Content:{"location":"San Francisco, CA"}<|call|>} PASS ok github.com/ollama/ollama/harmony 0.003s github.com/ollama/ollama/kvcache PASS ok github.com/ollama/ollama/kvcache 0.002s github.com/ollama/ollama/kvcache PASS ok github.com/ollama/ollama/kvcache 0.003s github.com/ollama/ollama/llama PASS ok github.com/ollama/ollama/llama 0.004s github.com/ollama/ollama/llama PASS ok github.com/ollama/ollama/llama 0.004s github.com/ollama/ollama/llama/llama.cpp/common ? github.com/ollama/ollama/llama/llama.cpp/common [no test files] github.com/ollama/ollama/llama/llama.cpp/src ? github.com/ollama/ollama/llama/llama.cpp/src [no test files] github.com/ollama/ollama/llama/llama.cpp/tools/mtmd ? 
github.com/ollama/ollama/llama/llama.cpp/tools/mtmd [no test files] github.com/ollama/ollama/llm 2025/10/04 05:51:14 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:14 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:14 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:14 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:14 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:14 INFO aborting completion request due to client closing the connection PASS ok github.com/ollama/ollama/llm 0.006s github.com/ollama/ollama/llm 2025/10/04 05:51:15 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:15 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:15 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:15 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:15 INFO aborting completion request due to client closing the connection 2025/10/04 05:51:15 INFO aborting completion request due to client closing the connection PASS ok github.com/ollama/ollama/llm 0.006s github.com/ollama/ollama/logutil ? github.com/ollama/ollama/logutil [no test files] github.com/ollama/ollama/ml ? github.com/ollama/ollama/ml [no test files] github.com/ollama/ollama/ml/backend ? github.com/ollama/ollama/ml/backend [no test files] github.com/ollama/ollama/ml/backend/ggml ? github.com/ollama/ollama/ml/backend/ggml [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src ? github.com/ollama/ollama/ml/backend/ggml/ggml/src [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile [no test files] github.com/ollama/ollama/ml/nn ? github.com/ollama/ollama/ml/nn [no test files] github.com/ollama/ollama/ml/nn/fast ? github.com/ollama/ollama/ml/nn/fast [no test files] github.com/ollama/ollama/ml/nn/pooling 2025/10/04 05:51:17 INFO looking for compatible GPUs 2025/10/04 05:51:17 INFO no compatible GPUs were discovered 2025/10/04 05:51:17 INFO architecture=test file_type=unknown name="" description="" num_tensors=1 num_key_values=3 2025/10/04 05:51:17 INFO system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) PASS ok github.com/ollama/ollama/ml/nn/pooling 0.009s github.com/ollama/ollama/ml/nn/pooling 2025/10/04 05:51:18 INFO looking for compatible GPUs 2025/10/04 05:51:18 INFO no compatible GPUs were discovered 2025/10/04 05:51:18 INFO architecture=test file_type=unknown name="" description="" num_tensors=1 num_key_values=3 2025/10/04 05:51:18 INFO system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) PASS ok github.com/ollama/ollama/ml/nn/pooling 0.010s github.com/ollama/ollama/ml/nn/rope ? 
github.com/ollama/ollama/ml/nn/rope [no test files] github.com/ollama/ollama/model time=2025-10-04T05:51:19.452Z level=DEBUG msg="adding bos token to prompt" id=1 time=2025-10-04T05:51:19.452Z level=DEBUG msg="adding eos token to prompt" id=2 PASS ok github.com/ollama/ollama/model 0.475s github.com/ollama/ollama/model time=2025-10-04T05:51:20.196Z level=DEBUG msg="adding bos token to prompt" id=1 time=2025-10-04T05:51:20.196Z level=DEBUG msg="adding eos token to prompt" id=2 PASS ok github.com/ollama/ollama/model 0.256s github.com/ollama/ollama/model/imageproc PASS ok github.com/ollama/ollama/model/imageproc 0.023s github.com/ollama/ollama/model/imageproc PASS ok github.com/ollama/ollama/model/imageproc 0.023s github.com/ollama/ollama/model/input ? github.com/ollama/ollama/model/input [no test files] github.com/ollama/ollama/model/models ? github.com/ollama/ollama/model/models [no test files] github.com/ollama/ollama/model/models/bert ? github.com/ollama/ollama/model/models/bert [no test files] github.com/ollama/ollama/model/models/deepseek2 ? github.com/ollama/ollama/model/models/deepseek2 [no test files] github.com/ollama/ollama/model/models/gemma2 ? github.com/ollama/ollama/model/models/gemma2 [no test files] github.com/ollama/ollama/model/models/gemma3 ? github.com/ollama/ollama/model/models/gemma3 [no test files] github.com/ollama/ollama/model/models/gemma3n ? github.com/ollama/ollama/model/models/gemma3n [no test files] github.com/ollama/ollama/model/models/gptoss ? github.com/ollama/ollama/model/models/gptoss [no test files] github.com/ollama/ollama/model/models/llama ? github.com/ollama/ollama/model/models/llama [no test files] github.com/ollama/ollama/model/models/llama4 PASS ok github.com/ollama/ollama/model/models/llama4 0.013s github.com/ollama/ollama/model/models/llama4 PASS ok github.com/ollama/ollama/model/models/llama4 0.014s github.com/ollama/ollama/model/models/mistral3 ? github.com/ollama/ollama/model/models/mistral3 [no test files] github.com/ollama/ollama/model/models/mllama PASS ok github.com/ollama/ollama/model/models/mllama 0.503s github.com/ollama/ollama/model/models/mllama PASS ok github.com/ollama/ollama/model/models/mllama 0.494s github.com/ollama/ollama/model/models/qwen2 ? github.com/ollama/ollama/model/models/qwen2 [no test files] github.com/ollama/ollama/model/models/qwen25vl ? github.com/ollama/ollama/model/models/qwen25vl [no test files] github.com/ollama/ollama/model/models/qwen3 ? github.com/ollama/ollama/model/models/qwen3 [no test files] github.com/ollama/ollama/model/parsers PASS ok github.com/ollama/ollama/model/parsers 0.005s github.com/ollama/ollama/model/parsers PASS ok github.com/ollama/ollama/model/parsers 0.004s github.com/ollama/ollama/model/renderers PASS ok github.com/ollama/ollama/model/renderers 0.004s github.com/ollama/ollama/model/renderers PASS ok github.com/ollama/ollama/model/renderers 0.004s github.com/ollama/ollama/openai PASS ok github.com/ollama/ollama/openai 0.011s github.com/ollama/ollama/openai PASS ok github.com/ollama/ollama/openai 0.011s github.com/ollama/ollama/parser PASS ok github.com/ollama/ollama/parser 0.006s github.com/ollama/ollama/parser PASS ok github.com/ollama/ollama/parser 0.007s github.com/ollama/ollama/progress ? github.com/ollama/ollama/progress [no test files] github.com/ollama/ollama/readline ? github.com/ollama/ollama/readline [no test files] github.com/ollama/ollama/runner ? 
github.com/ollama/ollama/runner [no test files] github.com/ollama/ollama/runner/common PASS ok github.com/ollama/ollama/runner/common 0.002s github.com/ollama/ollama/runner/common PASS ok github.com/ollama/ollama/runner/common 0.002s github.com/ollama/ollama/runner/llamarunner PASS ok github.com/ollama/ollama/runner/llamarunner 0.005s github.com/ollama/ollama/runner/llamarunner PASS ok github.com/ollama/ollama/runner/llamarunner 0.005s github.com/ollama/ollama/runner/ollamarunner PASS ok github.com/ollama/ollama/runner/ollamarunner 0.005s github.com/ollama/ollama/runner/ollamarunner PASS ok github.com/ollama/ollama/runner/ollamarunner 0.005s github.com/ollama/ollama/sample PASS ok github.com/ollama/ollama/sample 0.181s github.com/ollama/ollama/sample PASS ok github.com/ollama/ollama/sample 0.187s github.com/ollama/ollama/server time=2025-10-04T05:51:34.580Z level=INFO source=logging.go:32 msg="ollama app started" time=2025-10-04T05:51:34.581Z level=DEBUG source=convert.go:232 msg="vocabulary is smaller than expected, padding with dummy tokens" expect=32000 actual=1 time=2025-10-04T05:51:34.587Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=general.file_type type=uint32 time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=general.quantization_version type=uint32 time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=llama.vocab_size type=uint32 time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.pre type=string time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:34.587Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:34.617Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.617Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.618Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.618Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.618Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.618Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.618Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:51:34.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.619Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:51:34.619Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.619Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.619Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string 
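The time=... level=DEBUG records above and below are expected output from the github.com/ollama/ollama/server test suite, which logs GGUF key lookups and model-create steps at debug level; they are not errors. For reference, a rough local approximation of this %check test run under the GOPATH layout that go-rpm-integration set up earlier (a sketch only: the path is inferred from the "Testing in"/GOPATH lines above, the real tool iterates the import paths one by one instead of using ./..., and the -extldflags hardening options are omitted here):

  cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama
  GO111MODULE=off \
  GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode \
    go test -buildmode pie -compiler gc \
      -ldflags "-X github.com/ollama/ollama/version=0.12.3" ./...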
time=2025-10-04T05:51:34.620Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.620Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.620Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:51:34.620Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.620Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:51:34.620Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.620Z level=ERROR source=images.go:157 msg="unknown capability" capability=unknown time=2025-10-04T05:51:34.621Z level=WARN source=manifest.go:160 msg="bad manifest name" path=host/namespace/model/.hidden time=2025-10-04T05:51:34.622Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:51:34.622Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:51:34.622Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:51:34.623Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=4 time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:34.623Z level=WARN source=quantization.go:145 msg="tensor cols 100 are not divisible by 32, required for Q8_0 - using fallback quantization F16" time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.623Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:34.623Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.623Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.623Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[512 2]" offset=0 time=2025-10-04T05:51:34.623Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:34.624Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=output.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=12 shape="[512 2]" offset=0 time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:627 msg=output.weight kind=14 shape="[256 4]" offset=576 time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape="[512 2]" offset=0 time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=4096 time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:34.624Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=blk.0.attn_v.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:51:34.624Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:51:34.624Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=14 shape="[512 2]" offset=0 time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=864 time=2025-10-04T05:51:34.625Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.625Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[32 16 2]" offset=0 time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:51:34.625Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.625Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:34.625Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:34.625Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=8 shape="[32 16 2]" offset=0 time=2025-10-04T05:51:34.625Z level=DEBUG source=gguf.go:627 msg=output.weight kind=8 shape="[256 4]" offset=1088 time=2025-10-04T05:51:34.625Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.625Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.626Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.architecture default=unknown time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.627Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.628Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.628Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.629Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.630Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.631Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.631Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.632Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.632Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:34.632Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.632Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.632Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.632Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.633Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:34.634Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.634Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.634Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.634Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.634Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.635Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:51:34.635Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.635Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.636Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment 
default=32 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.637Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.638Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.638Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.638Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.638Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.block_count default=0 time=2025-10-04T05:51:34.638Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.638Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.638Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.639Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.639Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.639Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.639Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.639Z level=DEBUG source=create.go:98 msg="create model from model name" from=bob resp = api.ShowResponse{License:"", Modelfile:"# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM test:latest\n\nFROM \nTEMPLATE {{ .Prompt }}\n", Parameters:"", Template:"{{ .Prompt }}", System:"", Renderer:"", Parser:"", Details:api.ModelDetails{ParentModel:"", Format:"", Family:"gptoss", Families:[]string{"gptoss"}, ParameterSize:"20.9B", QuantizationLevel:"MXFP4"}, Messages:[]api.Message(nil), RemoteModel:"bob", RemoteHost:"https://ollama.com:11434", ModelInfo:map[string]interface {}{"general.architecture":"gptoss", "gptoss.context_length":131072, "gptoss.embedding_length":2880}, ProjectorInfo:map[string]interface {}(nil), Tensors:[]api.Tensor(nil), Capabilities:[]model.Capability{"completion", "tools", "thinking"}, ModifiedAt:time.Date(2025, time.October, 4, 5, 51, 34, 639847205, time.UTC)} time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.640Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:51:34.641Z level=DEBUG source=gguf.go:578 msg=tokenizer.chat_template type=string time=2025-10-04T05:51:34.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.641Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.650Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.650Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.650Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.650Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.650Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.651Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.652Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:34.652Z 
level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:34.652Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:34.652Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:34.653Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.653Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.653Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.653Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.653Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.654Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-10-04T05:51:34.655Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA" time=2025-10-04T05:51:34.655Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so* time=2025-10-04T05:51:34.655Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build1369899547/b001/libcuda.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]" time=2025-10-04T05:51:34.655Z level=DEBUG 
source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:51:34.655Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so* time=2025-10-04T05:51:34.655Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build1369899547/b001/libcudart.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcudart.so* /tmp/go-build1369899547/b001/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]" time=2025-10-04T05:51:34.656Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:51:34.656Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu" time=2025-10-04T05:51:34.656Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered" time=2025-10-04T05:51:34.656Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:51:34.656Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.656Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.657Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.658Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.658Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.659Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.659Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.659Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.661Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.661Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.661Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.663Z level=DEBUG source=gpu.go:410 msg="updating system 
memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.663Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.663Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.665Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.665Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.665Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.667Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.667Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.667Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.669Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.669Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.669Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.670Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.670Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.670Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.672Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.672Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.672Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.674Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" 
now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.674Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.674Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.675Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.675Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.675Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly1778946228/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.677Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:34.677Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:34.677Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:34.678Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:34.678Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] 
offset=320 time=2025-10-04T05:51:34.678Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.678Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.678Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.678Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.678Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.679Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.679Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.679Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.681Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.681Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.681Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.683Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.683Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.683Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.684Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.684Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.684Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.687Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.687Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.687Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.688Z level=DEBUG source=gpu.go:410 
msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.689Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.689Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.690Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.690Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.690Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.692Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.692Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.692Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.694Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.694Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.694Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.696Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.696Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.696Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly876376161/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.697Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:34.697Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.697Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.698Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:34.699Z level=DEBUG source=manifest.go:53 msg="layer does not exist" digest=sha256:776957f9c9239232f060e29d642d8f5ef3bb931f485c27a13ae6385515fb425c time=2025-10-04T05:51:34.699Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:34.699Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 
msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:34.699Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:34.700Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.700Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.700Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.700Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.700Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.701Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.701Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:51:34.701Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.701Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.701Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.701Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:51:34.701Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:51:34.701Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.701Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.701Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.703Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.703Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment 
default=32 time=2025-10-04T05:51:34.703Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3393902427/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.705Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.705Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.705Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3393902427/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.706Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:34.707Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.707Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.707Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.707Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.707Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3393902427/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.709Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.709Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.709Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3393902427/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.712Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.712Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.712Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3393902427/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.714Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.714Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.714Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3393902427/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.716Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" 
before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.716Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.716Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3393902427/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.748Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:34.748Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:34.748Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.748Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:34.748Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:34.749Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.749Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.749Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.749Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.749Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.749Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.750Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:51:34.750Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:51:34.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:51:34.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.752Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.752Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.752Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2048784056/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.754Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.754Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.754Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2048784056/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.755Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:34.756Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.756Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.756Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.756Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.756Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2048784056/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.759Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" 
time=2025-10-04T05:51:34.759Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.759Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2048784056/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.761Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.761Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.761Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2048784056/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.762Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:34.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.762Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.763Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.763Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.763Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2048784056/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.765Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.765Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.765Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2048784056/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.767Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.767Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2048784056/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.768Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:34.768Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:34.769Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:34.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 
time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:34.769Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:34.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.771Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.771Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.771Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2051499523/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.772Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.773Z 
level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2051499523/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.774Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.774Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.774Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2051499523/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.776Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.776Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2051499523/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.778Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.779Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2051499523/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:34.821Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:34.821Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:34.821Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.821Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:34.821Z level=DEBUG 
source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:34.821Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:34.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:34.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:34.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.823Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.823Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimecontent_streams_as_it_arr2880964676/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:34.914Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:34.914Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:34.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.914Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:34.914Z level=DEBUG 
source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:34.914Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:34.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:34.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:34.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:34.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:34.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:34.916Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:34.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:34.916Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimethinking_streams_separate2170233374/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:35.037Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.037Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.038Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 
time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:35.038Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:35.038Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:35.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:35.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.039Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.040Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:35.040Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.040Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimepartial_tags_buffer_until127127538/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 
time=2025-10-04T05:51:35.192Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.192Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.192Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:35.193Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:35.193Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.193Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.193Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:35.193Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:35.193Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.193Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.193Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.194Z level=DEBUG source=gpu.go:410 msg="updating system memory data" 
before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:35.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.194Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimesimple_assistant_after_an2211587965/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:35.224Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.225Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.225Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:35.225Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.227Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:35.227Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.227Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_parsed_and_retu913524725/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:35.258Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.258Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.258Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.258Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:35.258Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:35.258Z level=DEBUG 
source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:35.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:35.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:35.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.260Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:35.260Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.260Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_with_streaming_270140157/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:35.351Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.351Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.351Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 
msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:35.352Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:35.352Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.352Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.352Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.352Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:35.352Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:35.352Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.352Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.352Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.353Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:35.353Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.353Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingSimple2933090432/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:35.354Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.354Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.354Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.354Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:35.354Z level=DEBUG 
source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:35.354Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:35.355Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.355Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.355Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:35.355Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:35.355Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.355Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.355Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.356Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:35.357Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.357Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingsimple_message_without_thinking1200763466/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:35.357Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.357Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.357Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.357Z 
level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:35.357Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:35.358Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.358Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.358Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:35.358Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:35.358Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.358Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.358Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.358Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.359Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingmessage_with_analysis_channel_for3661027323/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 
time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:35.359Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:35.359Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.359Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.359Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.359Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.360Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:35.360Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.360Z level=DEBUG source=sched.go:208 msg="loading first model" 
model=/tmp/TestChatHarmonyParserStreamingstreaming_with_partial_tags_acros1340559513/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:35.360Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.360Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.361Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.362Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.362Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.362Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.362Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.362Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.362Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.362Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.362Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.363Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key 
with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.364Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.365Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.365Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:51:35 | 200 | 20.032µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/04 - 05:51:35 | 200 | 548.626µs | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/04 - 05:51:35 | 200 | 136.913µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:51:35 | 200 | 137.807µs | 127.0.0.1 | GET "/api/tags" time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.370Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.architecture default=unknown time=2025-10-04T05:51:35.370Z level=INFO source=images.go:518 msg="total blobs: 3" time=2025-10-04T05:51:35.370Z level=INFO source=images.go:525 msg="total unused blobs removed: 0" time=2025-10-04T05:51:35.370Z level=INFO source=server.go:164 msg=http status=200 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:37254 proto=HTTP/1.1 query="" time=2025-10-04T05:51:35.371Z level=WARN source=server.go:164 msg=http error="model not found" status=404 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:37254 proto=HTTP/1.1 query="" [GIN] 2025/10/04 - 05:51:35 | 200 | 148.325µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.371Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:51:35 | 200 | 475.846µs | 127.0.0.1 | POST "/api/create" time=2025-10-04T05:51:35.372Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.372Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.373Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown [GIN] 2025/10/04 - 05:51:35 | 200 | 394.181µs | 127.0.0.1 | POST "/api/copy" time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.374Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:51:35 | 200 | 521.431µs | 127.0.0.1 | POST "/api/show" time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.376Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:51:35 | 200 | 419.979µs | 127.0.0.1 | GET "/v1/models/show-model" [GIN] 2025/10/04 - 05:51:35 | 405 | 876ns | 127.0.0.1 | GET "/api/show" time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 
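The [GIN] access lines above come from the server route tests exercising the HTTP API surface (/api/version, /api/tags, /v1/models, /api/show, /api/copy, /api/create, /api/delete). As a minimal, hedged sketch only (not part of this build, and not taken from the test code): a client could poke the same endpoints against a locally running server; the base URL (the conventional default port 11434) and the tiny response struct below are assumptions made for the example.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Illustrative sketch: query a locally running ollama server over the same
    // endpoints the route tests above hit. Base URL and response shape are
    // assumptions for the example, not taken from this log.
    const baseURL = "http://127.0.0.1:11434"

    type versionResponse struct {
        Version string `json:"version"`
    }

    func main() {
        resp, err := http.Get(baseURL + "/api/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var v versionResponse
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("GET /api/version ->", resp.StatusCode, "version:", v.Version)

        // /api/tags lists local models; here we only check the status code,
        // mirroring the 200 responses recorded in the test log above.
        tags, err := http.Get(baseURL + "/api/tags")
        if err != nil {
            panic(err)
        }
        tags.Body.Close()
        fmt.Println("GET /api/tags ->", tags.StatusCode)
    }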
time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.379Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.381Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.381Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.381Z level=DEBUG source=gguf.go:578 msg=general.type type=string time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.type default=unknown time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.block_count default=0 time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.vision.block_count default=0 time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.block_count default=0 time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.vision.block_count default=0 time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:35.382Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.382Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.382Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:51:35.382Z level=INFO source=sched.go:417 msg="NewLlamaServer failed" model=foo error="something failed to load model blah: this model may be incompatible with your version of Ollama. 
If you previously pulled this model, try updating it by running `ollama pull `" time=2025-10-04T05:51:35.382Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:51:35.382Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:35.382Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:35.382Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open dummy_model_path: no such file or directory" time=2025-10-04T05:51:35.382Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:51:35.382Z level=ERROR source=sched.go:476 msg="error loading llama server" error="wait failure" time=2025-10-04T05:51:35.382Z level=DEBUG source=sched.go:478 msg="triggering expiration for failed load" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=dummy_model_path runner.num_ctx=4096 time=2025-10-04T05:51:35.383Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.383Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.383Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.383Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.384Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
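The long runs of `key with type not found ... default=...` DEBUG entries (ggml.go:276), together with the gguf.go:578 lines that report each key's type, reflect GGUF metadata lookups that fall back to a default value when a key is missing or holds an unexpected type. A minimal sketch of that lookup pattern, assuming a plain map of decoded key/value pairs; the generic helper below is invented for illustration and is not the actual ggml.go implementation.

    package main

    import (
        "fmt"
        "log/slog"
    )

    // keyValue returns kv[key] as type T, falling back to def when the key is
    // absent or holds a different type -- the behaviour suggested by the
    // "key with type not found" DEBUG lines above. Illustrative only.
    func keyValue[T any](kv map[string]any, key string, def T) T {
        if v, ok := kv[key]; ok {
            if t, ok := v.(T); ok {
                return t
            }
        }
        slog.Debug("key with type not found", "key", key, "default", def)
        return def
    }

    func main() {
        kv := map[string]any{
            "general.architecture": "llama",
            "llama.block_count":    uint32(32),
        }

        fmt.Println(keyValue(kv, "general.architecture", "unknown")) // present: "llama"
        fmt.Println(keyValue(kv, "general.alignment", uint32(32)))   // missing: falls back to 32
        fmt.Println(keyValue(kv, "tokenizer.chat_template", ""))     // missing: falls back to ""
    }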
time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.384Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.384Z level=INFO source=sched_test.go:179 msg=a time=2025-10-04T05:51:35.384Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.384Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.384Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSameModelSameRequest2850521560/002/3483046781 time=2025-10-04T05:51:35.384Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:35.384Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest2850521560/002/3483046781 runner.num_ctx=4096 time=2025-10-04T05:51:35.384Z level=INFO source=sched_test.go:196 msg=b time=2025-10-04T05:51:35.384Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSameModelSameRequest2850521560/002/3483046781 time=2025-10-04T05:51:35.384Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.384Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.384Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.384Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest2850521560/002/3483046781 runner.num_ctx=4096 time=2025-10-04T05:51:35.385Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.384Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:51:35.385Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.385Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.386Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.386Z level=INFO source=sched_test.go:223 msg=a time=2025-10-04T05:51:35.386Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.386Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.386Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 time=2025-10-04T05:51:35.386Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:35.386Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 time=2025-10-04T05:51:35.386Z level=INFO source=sched_test.go:241 msg=b time=2025-10-04T05:51:35.386Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 time=2025-10-04T05:51:35.386Z level=DEBUG source=sched.go:154 msg=reloading runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 time=2025-10-04T05:51:35.386Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:51:35.386Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 
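The sched_test.go entries above walk a runner through its lifecycle: load, serve a request, go idle, then either expire immediately (zero keep-alive duration) or arm an expiration timer, and finally unload once VRAM recovery is confirmed. A minimal sketch of just the idle-expiration step, under assumed types; the runner struct and channel below are invented for the example and are not the sched.go code.

    package main

    import (
        "fmt"
        "sync/atomic"
        "time"
    )

    // runner is an invented stand-in for a loaded model runner; only the
    // fields needed to illustrate idle expiration are included.
    type runner struct {
        model           string
        refCount        atomic.Int32
        sessionDuration time.Duration
    }

    // onRequestFinished mirrors the log above: when the last reference is
    // released, a runner with zero duration expires immediately, while one
    // with a non-zero duration arms a timer that later triggers the unload.
    func onRequestFinished(r *runner, expired chan<- *runner) {
        if r.refCount.Add(-1) > 0 {
            return // still in use by another request
        }
        if r.sessionDuration == 0 {
            fmt.Println("runner with zero duration has gone idle, expiring to unload")
            expired <- r
            return
        }
        fmt.Println("runner with non-zero duration has gone idle, adding timer")
        time.AfterFunc(r.sessionDuration, func() {
            fmt.Println("timer expired, expiring to unload")
            expired <- r
        })
    }

    func main() {
        expired := make(chan *runner, 1)
        r := &runner{model: "example-model", sessionDuration: 5 * time.Millisecond}
        r.refCount.Store(1)

        onRequestFinished(r, expired)
        fmt.Println("unloading:", (<-expired).model)
    }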
time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 time=2025-10-04T05:51:35.387Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 time=2025-10-04T05:51:35.387Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="20 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsSimpleReloadSameModel3986125753/002/4136285263 runner.num_ctx=4096 time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.387Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.387Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.387Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.388Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.388Z level=INFO source=sched_test.go:274 msg=a time=2025-10-04T05:51:35.388Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.388Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.388Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 time=2025-10-04T05:51:35.389Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 time=2025-10-04T05:51:35.389Z level=INFO source=sched_test.go:293 msg=b time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:51:35.389Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:51:35.389Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:51:35.389Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 time=2025-10-04T05:51:35.389Z level=INFO source=sched_test.go:311 msg=c time=2025-10-04T05:51:35.389Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=cpu available="24.2 GiB" time=2025-10-04T05:51:35.389Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=cpu total="29.8 GiB" available="19.6 GiB" time=2025-10-04T05:51:35.389Z level=INFO source=sched.go:470 msg="loaded runners" count=3 time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-4a runner.inference=cpu runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/006/3436102216 runner.num_ctx=4096 time=2025-10-04T05:51:35.389Z level=INFO source=sched_test.go:329 msg=d time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:51:35.389Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:162 msg="max runners achieved, unloading one to make room" runner_count=3 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 runner.num_ctx=4096 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/002/2361540790 time=2025-10-04T05:51:35.391Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:51:35.391Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="3.7 GiB" time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3b 
runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:51:35.391Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 runner.num_ctx=4096 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 time=2025-10-04T05:51:35.397Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/004/3075056390 time=2025-10-04T05:51:35.398Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:51:35.398Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:51:35.398Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:51:35.398Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:51:35.398Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3c runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1835972886/008/374476480 runner.num_ctx=4096 time=2025-10-04T05:51:35.398Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.398Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.398Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.398Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.398Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 
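With OLLAMA_MAX_LOADED_MODELS=3, the multiple-loaded-models test above exercises the "max runners achieved, unloading one to make room" path: an idle runner is unloaded if one exists, otherwise the busy runner with the shortest session duration is picked ("no idle runners, picking the shortest duration"). A sketch of that selection rule under assumed types; the loadedRunner struct and helper below are illustrative, not the sched.go implementation.

    package main

    import (
        "fmt"
        "time"
    )

    // loadedRunner is an invented type carrying just enough state to choose a
    // victim once the max-runner limit is reached.
    type loadedRunner struct {
        name            string
        refCount        int
        sessionDuration time.Duration
    }

    // findRunnerToUnload follows the rule visible in the log: an idle runner
    // (refCount == 0) wins outright; otherwise the runner with the shortest
    // session duration is chosen.
    func findRunnerToUnload(runners []*loadedRunner) *loadedRunner {
        var victim *loadedRunner
        for _, r := range runners {
            if r.refCount == 0 {
                fmt.Println("found an idle runner to unload:", r.name)
                return r
            }
            if victim == nil || r.sessionDuration < victim.sessionDuration {
                victim = r
            }
        }
        if victim != nil {
            fmt.Println("no idle runners, picking the shortest duration:", victim.name)
        }
        return victim
    }

    func main() {
        runners := []*loadedRunner{
            {name: "ollama-model-3b", refCount: 1, sessionDuration: 5 * time.Millisecond},
            {name: "ollama-model-4a", refCount: 1, sessionDuration: 10 * time.Millisecond},
        }
        _ = findRunnerToUnload(runners)
    }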
time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.398Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.398Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.398Z level=INFO source=sched_test.go:367 msg=a time=2025-10-04T05:51:35.398Z level=INFO source=sched_test.go:370 msg=b time=2025-10-04T05:51:35.398Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.399Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.399Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGetRunner3996652952/002/1550152148 time=2025-10-04T05:51:35.399Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:35.399Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 runner.num_ctx=4096 time=2025-10-04T05:51:35.399Z level=INFO source=sched_test.go:394 msg=c time=2025-10-04T05:51:35.399Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open bad path: no such file or directory" time=2025-10-04T05:51:35.399Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.399Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 runner.num_ctx=4096 duration=2ms time=2025-10-04T05:51:35.399Z level=DEBUG 
source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:35.401Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 runner.num_ctx=4096 time=2025-10-04T05:51:35.401Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 runner.num_ctx=4096 time=2025-10-04T05:51:35.401Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 runner.num_ctx=4096 time=2025-10-04T05:51:35.401Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 runner.num_ctx=4096 time=2025-10-04T05:51:35.401Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 runner.num_ctx=4096 time=2025-10-04T05:51:35.401Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 time=2025-10-04T05:51:35.401Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner3996652952/002/1550152148 time=2025-10-04T05:51:35.401Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.450Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:51:35.450Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=foo runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:51:35.450Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:51:35.470Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.470Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.470Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.470Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.471Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.471Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.471Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.471Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestPrematureExpired534776242/002/4273504476 time=2025-10-04T05:51:35.471Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:35.471Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal 
runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 time=2025-10-04T05:51:35.471Z level=INFO source=sched_test.go:481 msg="sending premature expired event now" time=2025-10-04T05:51:35.471Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 time=2025-10-04T05:51:35.471Z level=DEBUG source=sched.go:310 msg="expired event with positive ref count, retrying" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:51:35.476Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:35.476Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:51:35.476Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 runner.num_ctx=4096 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 time=2025-10-04T05:51:35.481Z level=DEBUG source=sched.go:332 msg="duplicate expired event, ignoring" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired534776242/002/4273504476 time=2025-10-04T05:51:35.507Z level=ERROR source=sched.go:272 msg="finished request signal received after model unloaded" modelPath=/tmp/TestPrematureExpired534776242/002/4273504476 time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=1 library=a available="900 B" time=2025-10-04T05:51:35.512Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=1 library=a total="1000 B" available="825 B" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=2 library=a available="1.9 KiB" time=2025-10-04T05:51:35.512Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=2 library=a total="2.0 KiB" available="1.8 KiB" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=b time=2025-10-04T05:51:35.512Z level=DEBUG source=sched.go:763 msg="shutting down runner" 
model=a time=2025-10-04T05:51:35.513Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:35.513Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:35.513Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:35.513Z level=INFO source=sched_test.go:669 msg=scenario1a time=2025-10-04T05:51:35.513Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:35.513Z level=DEBUG source=sched.go:142 msg="pending request cancelled or timed out, skipping scheduling" PASS ok github.com/ollama/ollama/server 0.951s github.com/ollama/ollama/server time=2025-10-04T05:51:36.730Z level=INFO source=logging.go:32 msg="ollama app started" time=2025-10-04T05:51:36.731Z level=DEBUG source=convert.go:232 msg="vocabulary is smaller than expected, padding with dummy tokens" expect=32000 actual=1 time=2025-10-04T05:51:36.737Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=general.file_type type=uint32 time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=general.quantization_version type=uint32 time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=llama.vocab_size type=uint32 time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.pre type=string time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:36.737Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:36.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.767Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.769Z level=DEBUG 
source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.769Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.769Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:51:36.769Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.769Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:51:36.769Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.770Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.770Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.770Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:51:36.770Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.770Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:51:36.770Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.770Z level=ERROR source=images.go:157 msg="unknown capability" capability=unknown time=2025-10-04T05:51:36.771Z level=WARN source=manifest.go:160 msg="bad manifest name" path=host/namespace/model/.hidden time=2025-10-04T05:51:36.772Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:51:36.772Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:51:36.772Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:51:36.773Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=4 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:36.773Z level=WARN source=quantization.go:145 msg="tensor cols 100 are not divisible by 32, required for Q8_0 - using fallback quantization F16" time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:36.773Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.773Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.773Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[512 2]" offset=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=output.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:51:36.773Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.773Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.773Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:51:36.773Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:51:36.773Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=12 shape="[512 2]" offset=0 time=2025-10-04T05:51:36.773Z level=DEBUG source=gguf.go:627 msg=output.weight kind=14 shape="[256 4]" offset=576 time=2025-10-04T05:51:36.774Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.775Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.775Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.775Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape="[512 2]" offset=0 time=2025-10-04T05:51:36.775Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=4096 time=2025-10-04T05:51:36.775Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.775Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:36.775Z level=DEBUG source=quantization.go:241 msg="tensor 
quantization adjusted for better quality" name=blk.0.attn_v.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:51:36.775Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.775Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.775Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:51:36.775Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:51:36.775Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=14 shape="[512 2]" offset=0 time=2025-10-04T05:51:36.775Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=864 time=2025-10-04T05:51:36.775Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.776Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.776Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[32 16 2]" offset=0 time=2025-10-04T05:51:36.776Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:51:36.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:36.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:51:36.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.776Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.776Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:51:36.776Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:51:36.776Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=8 shape="[32 16 2]" offset=0 time=2025-10-04T05:51:36.776Z level=DEBUG source=gguf.go:627 msg=output.weight kind=8 shape="[256 4]" offset=1088 time=2025-10-04T05:51:36.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.776Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.777Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.778Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.778Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.779Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.779Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
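The long run of "key with type not found" entries above is the GGUF metadata reader falling back to a caller-supplied default whenever a key is absent or has an unexpected type. The following is only a minimal, generic Go sketch of that lookup-with-default pattern under stated assumptions; the KV type, the keyValue helper, and the sample keys are hypothetical illustrations, not ollama's actual API.

package main

import "fmt"

// KV is a hypothetical bag of GGUF metadata, keyed by strings such as
// "general.architecture" or "general.alignment".
type KV map[string]any

// keyValue returns kv[key] if it exists and holds the requested type;
// otherwise it reports the miss (as the debug records above do) and
// returns the caller-supplied default.
func keyValue[T any](kv KV, key string, def T) T {
	if v, ok := kv[key]; ok {
		if t, ok := v.(T); ok {
			return t
		}
	}
	fmt.Printf("key with type not found key=%s default=%v\n", key, def)
	return def
}

func main() {
	kv := KV{"general.architecture": "llama"}

	arch := keyValue(kv, "general.architecture", "unknown") // present: returns "llama"
	align := keyValue(kv, "general.alignment", uint32(32))  // missing: falls back to 32

	fmt.Println(arch, align)
}

In this trace nearly every lookup misses and the defaults (alignment 32, architecture "unknown", empty chat template) are used, which is consistent with the tiny synthetic test models seen earlier (tensor shapes like "[1 1 1 1]" under /tmp/Test... paths).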
time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.783Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture 
default=unknown time=2025-10-04T05:51:36.784Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:36.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:51:36.785Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.785Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:51:36.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown time=2025-10-04T05:51:36.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.786Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.787Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.787Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.787Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.787Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.788Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.789Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.789Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.790Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.790Z level=DEBUG source=create.go:98 msg="create model from model name" from=bob resp = api.ShowResponse{License:"", Modelfile:"# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM test:latest\n\nFROM \nTEMPLATE {{ .Prompt }}\n", Parameters:"", Template:"{{ .Prompt }}", System:"", Renderer:"", Parser:"", Details:api.ModelDetails{ParentModel:"", Format:"", Family:"gptoss", Families:[]string{"gptoss"}, ParameterSize:"20.9B", QuantizationLevel:"MXFP4"}, Messages:[]api.Message(nil), RemoteModel:"bob", RemoteHost:"https://ollama.com:11434", ModelInfo:map[string]interface {}{"general.architecture":"gptoss", "gptoss.context_length":131072, "gptoss.embedding_length":2880}, ProjectorInfo:map[string]interface {}(nil), Tensors:[]api.Tensor(nil), Capabilities:[]model.Capability{"completion", "tools", "thinking"}, ModifiedAt:time.Date(2025, time.October, 4, 5, 51, 36, 790522570, time.UTC)} time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key 
with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.791Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.791Z level=DEBUG source=gguf.go:578 msg=tokenizer.chat_template type=string time=2025-10-04T05:51:36.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.792Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.800Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.801Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.802Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:36.803Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:36.803Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:36.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.file_type default=0 time=2025-10-04T05:51:36.804Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-10-04T05:51:36.805Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA" time=2025-10-04T05:51:36.805Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so* time=2025-10-04T05:51:36.805Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build2359792681/b001/libcuda.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]" time=2025-10-04T05:51:36.805Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:51:36.805Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so* time=2025-10-04T05:51:36.805Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build2359792681/b001/libcudart.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcudart.so* /tmp/go-build2359792681/b001/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]" time=2025-10-04T05:51:36.805Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:51:36.806Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu" time=2025-10-04T05:51:36.806Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered" time=2025-10-04T05:51:36.806Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:51:36.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.806Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.808Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.808Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.808Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.809Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.809Z level=DEBUG source=ggml.go:276 msg="key 
with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.810Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.811Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.811Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.811Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.813Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.813Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.813Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.814Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.815Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.815Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.816Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.816Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.816Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.818Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.818Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.818Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.820Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.820Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.820Z level=DEBUG 
source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.822Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.822Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.822Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.823Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.823Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.823Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.825Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.825Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.825Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly3465905223/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.826Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:36.826Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:36.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.827Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 
msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:36.827Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:36.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.827Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.829Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.829Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.829Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.831Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.831Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.831Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.832Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.832Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.832Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.834Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" 
time=2025-10-04T05:51:36.834Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.834Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.836Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.836Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.836Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.838Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.838Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.838Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.840Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.840Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.840Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.842Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.842Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.842Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.844Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.844Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.844Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.845Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.845Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment 
default=32 time=2025-10-04T05:51:36.845Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly2732848944/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.847Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:36.847Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.847Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:36.850Z level=DEBUG source=manifest.go:53 msg="layer does not exist" digest=sha256:776957f9c9239232f060e29d642d8f5ef3bb931f485c27a13ae6385515fb425c 
time=2025-10-04T05:51:36.850Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:36.850Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:36.850Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:36.850Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.850Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.850Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.850Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.850Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.851Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.851Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:51:36.851Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.851Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.851Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type 
default=unknown time=2025-10-04T05:51:36.851Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:51:36.851Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:51:36.851Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.851Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.851Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.853Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.853Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.853Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1122312119/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.854Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.854Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.854Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1122312119/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.856Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:36.856Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.856Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.857Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.858Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.858Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1122312119/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.859Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.859Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.859Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1122312119/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.861Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.861Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.861Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1122312119/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.863Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.863Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.863Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1122312119/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.865Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.865Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.865Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1122312119/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.898Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:36.898Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:36.898Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:36.898Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 
time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:36.898Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.899Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:51:36.899Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:51:36.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.900Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.900Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.901Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.901Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.901Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3870140109/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.903Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.903Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.903Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3870140109/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.904Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:36.905Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.906Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.906Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3870140109/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.908Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.908Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3870140109/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.910Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.910Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.910Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3870140109/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.911Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:51:36.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.912Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.912Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3870140109/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.914Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.915Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3870140109/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.916Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" 
now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.916Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate3870140109/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.918Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:36.918Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:36.918Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:36.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:36.918Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:36.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type 
default=0 time=2025-10-04T05:51:36.919Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.919Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag614488171/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.921Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.921Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag614488171/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.923Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.923Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag614488171/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.925Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.926Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag614488171/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.927Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.927Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag614488171/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:51:36.970Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:36.970Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:36.970Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:36.970Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 
time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:36.970Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:36.970Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.970Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.970Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:36.970Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:36.970Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:36.970Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:36.970Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:36.971Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:36.971Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:36.971Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimecontent_streams_as_it_arr801704008/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.063Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" 
time=2025-10-04T05:51:37.063Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.063Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.063Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.063Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.064Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.064Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.064Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:37.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.064Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.066Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" 
now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.066Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.066Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimethinking_streams_separate2255017605/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.187Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.187Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.187Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:37.187Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.188Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.188Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.188Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.188Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.188Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.188Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.188Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.188Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:37.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.188Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.189Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.189Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimepartial_tags_buffer_until1569407716/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.341Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.341Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.341Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.341Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.341Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 
time=2025-10-04T05:51:37.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.342Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.344Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.344Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.344Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimesimple_assistant_after_an3211399405/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.374Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.374Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.375Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.375Z level=DEBUG 
source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.375Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:37.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.375Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.376Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.377Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.377Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_parsed_and_retu196356480/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.407Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.407Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.407Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.407Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.407Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 
shape=[1] offset=32 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.408Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:37.408Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.408Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.408Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.408Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.408Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.408Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.408Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.409Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.409Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.409Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_with_streaming_1416141343/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.501Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.501Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.501Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.501Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length 
type=uint32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.501Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:37.502Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.502Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.502Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.502Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.502Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.502Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.502Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.503Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.503Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.503Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingSimple1877666403/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.503Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.503Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.503Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.503Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string 
time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.503Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.504Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:37.504Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.504Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.504Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.504Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.504Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.504Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.504Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.505Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.505Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.505Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingsimple_message_without_thinking2945727847/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 
time=2025-10-04T05:51:37.505Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.505Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.506Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.506Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.506Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:37.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.508Z level=DEBUG source=gpu.go:410 msg="updating system memory data" 
before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.508Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.508Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingmessage_with_analysis_channel_for3777138396/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.508Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.509Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.509Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.509Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:51:37.509Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:51:37.509Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.509Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.509Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:51:37.509Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:51:37.509Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.509Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.509Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.510Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.1 GiB" before.free_swap="136.2 GiB" now.total="7.6 GiB" now.free="1.1 GiB" now.free_swap="136.2 GiB" time=2025-10-04T05:51:37.510Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.510Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingstreaming_with_partial_tags_acros1361866742/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:51:37.510Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.510Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.511Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.512Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.514Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.515Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.516Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:51:37 | 200 | 24.675µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/04 - 05:51:37 | 200 | 46.796µs | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/04 - 05:51:37 | 200 | 101.399µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.519Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:51:37 | 200 | 127.133µs | 127.0.0.1 | GET "/api/tags" time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.block_count default=0 time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.520Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.521Z level=INFO source=images.go:518 msg="total blobs: 3" time=2025-10-04T05:51:37.521Z level=INFO source=images.go:525 msg="total unused blobs removed: 0" time=2025-10-04T05:51:37.521Z level=INFO source=server.go:164 msg=http status=200 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:54068 proto=HTTP/1.1 query="" time=2025-10-04T05:51:37.521Z level=WARN source=server.go:164 msg=http error="model not found" status=404 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:54068 proto=HTTP/1.1 query="" [GIN] 2025/10/04 - 05:51:37 | 200 | 186.685µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.522Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:51:37 | 200 | 398.96µs | 127.0.0.1 | POST "/api/create" time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.block_count default=0 time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.523Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:51:37 | 200 | 308.937µs | 127.0.0.1 | POST "/api/copy" time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.524Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:51:37 | 200 | 468.756µs | 127.0.0.1 | POST "/api/show" time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.525Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.525Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.526Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:51:37 | 200 | 545.376µs | 127.0.0.1 | GET "/v1/models/show-model" [GIN] 2025/10/04 - 05:51:37 | 405 | 810ns | 127.0.0.1 | GET "/api/show" time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.527Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.file_type default=0 time=2025-10-04T05:51:37.528Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.530Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.530Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.530Z level=DEBUG source=gguf.go:578 msg=general.type type=string time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.block_count default=0 time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.vision.block_count default=0 time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.block_count default=0 time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.vision.block_count default=0 time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.530Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:51:37.531Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.531Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.531Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:51:37.531Z level=INFO source=sched.go:417 msg="NewLlamaServer failed" model=foo error="something failed to load model blah: this model may be incompatible with your version of Ollama. 
If you previously pulled this model, try updating it by running `ollama pull `" time=2025-10-04T05:51:37.531Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:51:37.531Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:37.531Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:37.531Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open dummy_model_path: no such file or directory" time=2025-10-04T05:51:37.531Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:51:37.531Z level=ERROR source=sched.go:476 msg="error loading llama server" error="wait failure" time=2025-10-04T05:51:37.531Z level=DEBUG source=sched.go:478 msg="triggering expiration for failed load" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=dummy_model_path runner.num_ctx=4096 time=2025-10-04T05:51:37.533Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.533Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.533Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.533Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.533Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.533Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.533Z level=INFO source=sched_test.go:179 msg=a time=2025-10-04T05:51:37.533Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.534Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.534Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSameModelSameRequest2019544428/002/369306147 time=2025-10-04T05:51:37.534Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:37.534Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest2019544428/002/369306147 runner.num_ctx=4096 time=2025-10-04T05:51:37.534Z level=INFO source=sched_test.go:196 msg=b time=2025-10-04T05:51:37.534Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSameModelSameRequest2019544428/002/369306147 time=2025-10-04T05:51:37.534Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.534Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest2019544428/002/369306147 runner.num_ctx=4096 time=2025-10-04T05:51:37.534Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.535Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.535Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.535Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:51:37.535Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.535Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.535Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.535Z level=INFO source=sched_test.go:223 msg=a time=2025-10-04T05:51:37.535Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.535Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.535Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 time=2025-10-04T05:51:37.535Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:37.535Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 time=2025-10-04T05:51:37.535Z level=INFO source=sched_test.go:241 msg=b time=2025-10-04T05:51:37.536Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 time=2025-10-04T05:51:37.536Z level=DEBUG source=sched.go:154 msg=reloading runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 time=2025-10-04T05:51:37.536Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:51:37.536Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 
time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 time=2025-10-04T05:51:37.537Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 time=2025-10-04T05:51:37.537Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="20 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsSimpleReloadSameModel2692693810/002/3183682686 runner.num_ctx=4096 time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.537Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.537Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.537Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.537Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.537Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.538Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.538Z level=INFO source=sched_test.go:274 msg=a time=2025-10-04T05:51:37.538Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.538Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 time=2025-10-04T05:51:37.538Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:37.538Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 time=2025-10-04T05:51:37.538Z level=INFO source=sched_test.go:293 msg=b time=2025-10-04T05:51:37.538Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:51:37.538Z level=DEBUG source=ggml.go:276 msg="key with type 
not found" key=general.alignment default=32 time=2025-10-04T05:51:37.538Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:51:37.538Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:51:37.538Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:51:37.538Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:51:37.538Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 time=2025-10-04T05:51:37.539Z level=INFO source=sched_test.go:311 msg=c time=2025-10-04T05:51:37.539Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.539Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=cpu available="24.2 GiB" time=2025-10-04T05:51:37.539Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=cpu total="29.8 GiB" available="19.6 GiB" time=2025-10-04T05:51:37.539Z level=INFO source=sched.go:470 msg="loaded runners" count=3 time=2025-10-04T05:51:37.539Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:51:37.539Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-4a runner.inference=cpu runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/006/2761855621 runner.num_ctx=4096 time=2025-10-04T05:51:37.539Z level=INFO source=sched_test.go:329 msg=d time=2025-10-04T05:51:37.539Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.539Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:51:37.539Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:162 msg="max runners achieved, unloading one to make room" runner_count=3 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 runner.num_ctx=4096 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/002/886968237 time=2025-10-04T05:51:37.541Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:51:37.541Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="3.7 GiB" time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 
runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:51:37.541Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 runner.num_ctx=4096 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/004/1919936885 time=2025-10-04T05:51:37.547Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:51:37.547Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:51:37.547Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3c runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels244166371/008/2687499668 runner.num_ctx=4096 time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.547Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.548Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.548Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.548Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.548Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 
msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.548Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.548Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.548Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.549Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.549Z level=INFO source=sched_test.go:367 msg=a time=2025-10-04T05:51:37.549Z level=INFO source=sched_test.go:370 msg=b time=2025-10-04T05:51:37.549Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.550Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.550Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGetRunner263122430/002/3004586656 time=2025-10-04T05:51:37.550Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:37.550Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 runner.num_ctx=4096 time=2025-10-04T05:51:37.551Z level=INFO source=sched_test.go:394 msg=c time=2025-10-04T05:51:37.551Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open bad path: no such file or directory" time=2025-10-04T05:51:37.551Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.551Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 runner.num_ctx=4096 duration=2ms time=2025-10-04T05:51:37.551Z level=DEBUG source=sched.go:304 msg="after processing request finished event" 
runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:37.553Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 runner.num_ctx=4096 time=2025-10-04T05:51:37.553Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 runner.num_ctx=4096 time=2025-10-04T05:51:37.553Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 runner.num_ctx=4096 time=2025-10-04T05:51:37.553Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 runner.num_ctx=4096 time=2025-10-04T05:51:37.553Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 runner.num_ctx=4096 time=2025-10-04T05:51:37.553Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 time=2025-10-04T05:51:37.553Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner263122430/002/3004586656 time=2025-10-04T05:51:37.553Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.601Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:51:37.601Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 refCount=0 
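The scheduler trace above and below follows a reference-counted idle/expire/unload cycle: when a request's context finishes, the runner's refCount drops; an idle runner either expires immediately (zero keep-alive duration) or arms a timer; the resulting expired event is only acted on once refCount has reached zero, after which an unloaded event is sent. The following Go sketch illustrates that cycle under stated assumptions; it is not the actual sched.go code, and every type, field, and channel name in it is illustrative.

package main

import (
	"fmt"
	"time"
)

// runner is an illustrative stand-in for a loaded model runner; the real
// scheduler serializes access to this state with a lock.
type runner struct {
	name      string
	refCount  int           // in-flight requests still using this runner
	keepAlive time.Duration // 0 means "expire as soon as idle"
	expired   chan *runner
}

// requestFinished mirrors "context for request finished": drop the reference,
// then either expire immediately ("runner with zero duration has gone idle")
// or arm a keep-alive timer ("runner with non-zero duration has gone idle").
func (r *runner) requestFinished() {
	r.refCount--
	if r.refCount > 0 {
		return
	}
	if r.keepAlive == 0 {
		r.expired <- r
		return
	}
	time.AfterFunc(r.keepAlive, func() { r.expired <- r })
}

// unloadLoop mirrors "runner expired event received": unload only once the
// reference count has really reached zero, otherwise the event is retried.
func unloadLoop(expired chan *runner, unloaded chan string) {
	for r := range expired {
		if r.refCount > 0 {
			fmt.Println("expired event with positive ref count, retrying")
			continue
		}
		fmt.Println("unloading", r.name)
		unloaded <- r.name // "sending an unloaded event"
	}
}

func main() {
	expired := make(chan *runner, 1)
	unloaded := make(chan string, 1)
	go unloadLoop(expired, unloaded)

	r := &runner{name: "ollama-model-1a", refCount: 1, keepAlive: 2 * time.Millisecond, expired: expired}
	r.requestFinished()                             // idle -> timer -> expired event
	fmt.Println("unload completed for", <-unloaded) // matches "unload completed"
}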
time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:51:37.601Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:51:37.621Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.621Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.622Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.622Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.622Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.622Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestPrematureExpired2862928102/002/456476355 time=2025-10-04T05:51:37.622Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:51:37.622Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 time=2025-10-04T05:51:37.623Z level=INFO source=sched_test.go:481 msg="sending premature expired event now" time=2025-10-04T05:51:37.623Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 time=2025-10-04T05:51:37.623Z level=DEBUG source=sched.go:310 msg="expired event with positive ref count, retrying" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:51:37.628Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:51:37.628Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:51:37.628Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no 
pending requests" time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 runner.num_ctx=4096 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 time=2025-10-04T05:51:37.633Z level=DEBUG source=sched.go:332 msg="duplicate expired event, ignoring" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired2862928102/002/456476355 time=2025-10-04T05:51:37.658Z level=ERROR source=sched.go:272 msg="finished request signal received after model unloaded" modelPath=/tmp/TestPrematureExpired2862928102/002/456476355 time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=1 library=a available="900 B" time=2025-10-04T05:51:37.664Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=1 library=a total="1000 B" available="825 B" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=2 library=a available="1.9 KiB" time=2025-10-04T05:51:37.664Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=2 library=a total="2.0 KiB" available="1.8 KiB" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=a time=2025-10-04T05:51:37.664Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=b time=2025-10-04T05:51:37.664Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:51:37.664Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:51:37.665Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:51:37.665Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:51:37.665Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:51:37.665Z level=INFO source=sched_test.go:669 msg=scenario1a time=2025-10-04T05:51:37.665Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:51:37.665Z level=DEBUG source=sched.go:142 msg="pending request cancelled or timed out, skipping scheduling" time=2025-10-04T05:51:37.670Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:51:37.670Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" PASS ok github.com/ollama/ollama/server 0.954s github.com/ollama/ollama/server/internal/cache/blob PASS ok github.com/ollama/ollama/server/internal/cache/blob 0.005s github.com/ollama/ollama/server/internal/cache/blob PASS ok github.com/ollama/ollama/server/internal/cache/blob 0.005s github.com/ollama/ollama/server/internal/client/ollama 2025/10/04 05:51:38 http: TLS handshake error from 127.0.0.1:53250: remote error: tls: bad certificate PASS ok github.com/ollama/ollama/server/internal/client/ollama 0.153s github.com/ollama/ollama/server/internal/client/ollama 2025/10/04 05:51:39 http: TLS handshake error from 127.0.0.1:54752: remote error: tls: bad certificate PASS ok github.com/ollama/ollama/server/internal/client/ollama 0.156s github.com/ollama/ollama/server/internal/internal/backoff ? github.com/ollama/ollama/server/internal/internal/backoff [no test files] github.com/ollama/ollama/server/internal/internal/names PASS ok github.com/ollama/ollama/server/internal/internal/names 0.003s github.com/ollama/ollama/server/internal/internal/names PASS ok github.com/ollama/ollama/server/internal/internal/names 0.003s github.com/ollama/ollama/server/internal/internal/stringsx PASS ok github.com/ollama/ollama/server/internal/internal/stringsx 0.003s github.com/ollama/ollama/server/internal/internal/stringsx PASS ok github.com/ollama/ollama/server/internal/internal/stringsx 0.003s github.com/ollama/ollama/server/internal/internal/syncs ? github.com/ollama/ollama/server/internal/internal/syncs [no test files] github.com/ollama/ollama/server/internal/manifest ? 
github.com/ollama/ollama/server/internal/manifest [no test files] github.com/ollama/ollama/server/internal/registry 2025/10/04 05:51:40 http: TLS handshake error from 127.0.0.1:58876: write tcp 127.0.0.1:36607->127.0.0.1:58876: use of closed network connection PASS ok github.com/ollama/ollama/server/internal/registry 0.009s github.com/ollama/ollama/server/internal/registry 2025/10/04 05:51:40 http: TLS handshake error from 127.0.0.1:54662: write tcp 127.0.0.1:45233->127.0.0.1:54662: use of closed network connection PASS ok github.com/ollama/ollama/server/internal/registry 0.010s github.com/ollama/ollama/server/internal/testutil ? github.com/ollama/ollama/server/internal/testutil [no test files] github.com/ollama/ollama/template PASS ok github.com/ollama/ollama/template 0.601s github.com/ollama/ollama/template PASS ok github.com/ollama/ollama/template 0.586s github.com/ollama/ollama/thinking PASS ok github.com/ollama/ollama/thinking 0.002s github.com/ollama/ollama/thinking PASS ok github.com/ollama/ollama/thinking 0.002s github.com/ollama/ollama/tools PASS ok github.com/ollama/ollama/tools 0.006s github.com/ollama/ollama/tools PASS ok github.com/ollama/ollama/tools 0.006s github.com/ollama/ollama/types/errtypes ? github.com/ollama/ollama/types/errtypes [no test files] github.com/ollama/ollama/types/model PASS ok github.com/ollama/ollama/types/model 0.004s github.com/ollama/ollama/types/model PASS ok github.com/ollama/ollama/types/model 0.004s github.com/ollama/ollama/types/syncmap ? github.com/ollama/ollama/types/syncmap [no test files] github.com/ollama/ollama/version ? github.com/ollama/ollama/version [no test files] + RPM_EC=0 ++ jobs -p + exit 0 Processing files: ollama-0.12.3-1.fc43.x86_64 Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.8nP8Gq + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + DOCDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export DOCDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/docs /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/CONTRIBUTING.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/README.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/SECURITY.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + RPM_EC=0 ++ jobs -p + exit 0 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.hSRyiH + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/vendor/modules.txt /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + RPM_EC=0 ++ jobs -p + exit 0 warning: File listed twice: /usr/share/licenses/ollama Provides: bundled(golang(github.com/agnivade/levenshtein)) = 1.1.1 bundled(golang(github.com/apache/arrow/go/arrow)) = bc21918 bundled(golang(github.com/bytedance/sonic)) = 1.11.6 
bundled(golang(github.com/bytedance/sonic/loader)) = 0.1.1 bundled(golang(github.com/chewxy/hm)) = 1.0.0 bundled(golang(github.com/chewxy/math32)) = 1.11.0 bundled(golang(github.com/cloudwego/base64x)) = 0.1.4 bundled(golang(github.com/cloudwego/iasm)) = 0.2.0 bundled(golang(github.com/containerd/console)) = 1.0.3 bundled(golang(github.com/d4l3k/go-bfloat16)) = 690c3bd bundled(golang(github.com/davecgh/go-spew)) = 1.1.1 bundled(golang(github.com/dlclark/regexp2)) = 1.11.4 bundled(golang(github.com/emirpasic/gods/v2)) = 2.0.0_alpha bundled(golang(github.com/gabriel-vasile/mimetype)) = 1.4.3 bundled(golang(github.com/gin-contrib/cors)) = 1.7.2 bundled(golang(github.com/gin-contrib/sse)) = 0.1.0 bundled(golang(github.com/gin-gonic/gin)) = 1.10.0 bundled(golang(github.com/go-playground/locales)) = 0.14.1 bundled(golang(github.com/go-playground/universal-translator)) = 0.18.1 bundled(golang(github.com/go-playground/validator/v10)) = 10.20.0 bundled(golang(github.com/goccy/go-json)) = 0.10.2 bundled(golang(github.com/gogo/protobuf)) = 1.3.2 bundled(golang(github.com/golang/protobuf)) = 1.5.4 bundled(golang(github.com/google/flatbuffers)) = 24.3.25+incompatible bundled(golang(github.com/google/go-cmp)) = 0.7.0 bundled(golang(github.com/google/uuid)) = 1.6.0 bundled(golang(github.com/inconshreveable/mousetrap)) = 1.1.0 bundled(golang(github.com/json-iterator/go)) = 1.1.12 bundled(golang(github.com/klauspost/cpuid/v2)) = 2.2.7 bundled(golang(github.com/kr/text)) = 0.2.0 bundled(golang(github.com/leodido/go-urn)) = 1.4.0 bundled(golang(github.com/mattn/go-isatty)) = 0.0.20 bundled(golang(github.com/mattn/go-runewidth)) = 0.0.14 bundled(golang(github.com/modern-go/concurrent)) = bacd9c7 bundled(golang(github.com/modern-go/reflect2)) = 1.0.2 bundled(golang(github.com/nlpodyssey/gopickle)) = 0.3.0 bundled(golang(github.com/olekukonko/tablewriter)) = 0.0.5 bundled(golang(github.com/pdevine/tensor)) = f88f456 bundled(golang(github.com/pelletier/go-toml/v2)) = 2.2.2 bundled(golang(github.com/pkg/errors)) = 0.9.1 bundled(golang(github.com/pmezard/go-difflib)) = 1.0.0 bundled(golang(github.com/rivo/uniseg)) = 0.2.0 bundled(golang(github.com/spf13/cobra)) = 1.7.0 bundled(golang(github.com/spf13/pflag)) = 1.0.5 bundled(golang(github.com/stretchr/testify)) = 1.9.0 bundled(golang(github.com/twitchyliquid64/golang-asm)) = 0.15.1 bundled(golang(github.com/ugorji/go/codec)) = 1.2.12 bundled(golang(github.com/x448/float16)) = 0.8.4 bundled(golang(github.com/xtgo/set)) = 1.0.0 bundled(golang(go4.org/unsafe/assume-no-moving-gc)) = b99613f bundled(golang(golang.org/x/arch)) = 0.8.0 bundled(golang(golang.org/x/crypto)) = 0.36.0 bundled(golang(golang.org/x/exp)) = aa4b98e bundled(golang(golang.org/x/image)) = 0.22.0 bundled(golang(golang.org/x/net)) = 0.38.0 bundled(golang(golang.org/x/sync)) = 0.12.0 bundled(golang(golang.org/x/sys)) = 0.31.0 bundled(golang(golang.org/x/term)) = 0.30.0 bundled(golang(golang.org/x/text)) = 0.23.0 bundled(golang(golang.org/x/tools)) = 0.30.0 bundled(golang(golang.org/x/xerrors)) = 5ec99f8 bundled(golang(gonum.org/v1/gonum)) = 0.15.0 bundled(golang(google.golang.org/protobuf)) = 1.34.1 bundled(golang(gopkg.in/yaml.v3)) = 3.0.1 bundled(golang(gorgonia.org/vecf32)) = 0.9.0 bundled(golang(gorgonia.org/vecf64)) = 0.9.0 bundled(llama-cpp) = b6121 config(ollama) = 0.12.3-1.fc43 group(ollama) group(ollama) = ZyBvbGxhbWEgLSAt ollama = 0.12.3-1.fc43 ollama(x86-64) = 0.12.3-1.fc43 user(ollama) = dSBvbGxhbWEgLSAiT2xsYW1hIiAvdmFyL2xpYi9vbGxhbWEgLQAA Requires(interp): /bin/sh /bin/sh /bin/sh /bin/sh 
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(pre): /bin/sh group(ollama) user(ollama) Requires(post): /bin/sh Requires(preun): /bin/sh Requires(postun): /bin/sh group(ollama) user(ollama) Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libresolv.so.2()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.11)(64bit) libstdc++.so.6(CXXABI_1.3.13)(64bit) libstdc++.so.6(CXXABI_1.3.15)(64bit) libstdc++.so.6(CXXABI_1.3.2)(64bit) libstdc++.so.6(CXXABI_1.3.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.17)(64bit) libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.19)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.22)(64bit) libstdc++.so.6(GLIBCXX_3.4.25)(64bit) libstdc++.so.6(GLIBCXX_3.4.26)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) libstdc++.so.6(GLIBCXX_3.4.32)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) rtld(GNU_HASH) Recommends: ollama-ggml Processing files: ollama-ggml-0.12.3-1.fc43.x86_64 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.EsyMfL + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml + RPM_EC=0 ++ jobs -p + exit 0 Provides: bundled(llama-cpp) = b6121 ollama-ggml = 0.12.3-1.fc43 ollama-ggml(x86-64) = 0.12.3-1.fc43 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) rtld(GNU_HASH) Processing files: ollama-ggml-cpu-0.12.3-1.fc43.x86_64 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.u6eK43 + umask 022 + cd 
/builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu + RPM_EC=0 ++ jobs -p + exit 0 Provides: bundled(llama-cpp) = b6121 ollama-ggml-cpu = 0.12.3-1.fc43 ollama-ggml-cpu(x86-64) = 0.12.3-1.fc43 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) rtld(GNU_HASH) Supplements: ollama-ggml(x86-64) Processing files: ollama-ggml-rocm-0.12.3-1.fc43.x86_64 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.GhrkYA + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm + RPM_EC=0 ++ jobs -p + exit 0 Provides: bundled(llama-cpp) = b6121 ollama-ggml-rocm = 0.12.3-1.fc43 ollama-ggml-rocm(x86-64) = 0.12.3-1.fc43 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libamdhip64.so.6()(64bit) libamdhip64.so.6(hip_4.2)(64bit) libamdhip64.so.6(hip_6.0)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libhipblas.so.2()(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) librocblas.so.4()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) Supplements: if ollama-ggml(x86-64) rocm-hip(x86-64) Processing files: ollama-debugsource-0.12.3-1.fc43.x86_64 Provides: ollama-debugsource = 0.12.3-1.fc43 ollama-debugsource(x86-64) = 0.12.3-1.fc43 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Processing files: ollama-debuginfo-0.12.3-1.fc43.x86_64 Provides: debuginfo(build-id) = bc3cdfdb2e991e2b47e2f927fd94c0dd35fb72e6 
ollama-debuginfo = 0.12.3-1.fc43 ollama-debuginfo(x86-64) = 0.12.3-1.fc43 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc43 Processing files: ollama-ggml-debuginfo-0.12.3-1.fc43.x86_64 Provides: debuginfo(build-id) = 52d52d80a3ec766959d4da759bf491bc73e7277f libggml-base.so-0.12.3-1.fc43.x86_64.debug()(64bit) ollama-ggml-debuginfo = 0.12.3-1.fc43 ollama-ggml-debuginfo(x86-64) = 0.12.3-1.fc43 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc43 Processing files: ollama-ggml-cpu-debuginfo-0.12.3-1.fc43.x86_64 Provides: debuginfo(build-id) = 19cad8b8c29acb3b19bd4e8694da5e0bdb02a0eb debuginfo(build-id) = 284b39e50aeda8509a8a2c65e899daa2184bff5f debuginfo(build-id) = 4570ec977dc3bc9c9b84347e8cff5e94058dd87f debuginfo(build-id) = 464c0a5f57a25bcd27e1e46477d427ca1f4321d5 debuginfo(build-id) = a290b9e0c298e4773087162fb6905a9564d8f740 debuginfo(build-id) = f6dd240da6af419c83548fc0f2816b71be54830c debuginfo(build-id) = f8209655d1fdac2c790c3d5a016127c47dc3dff9 libggml-cpu-alderlake.so-0.12.3-1.fc43.x86_64.debug()(64bit) libggml-cpu-haswell.so-0.12.3-1.fc43.x86_64.debug()(64bit) libggml-cpu-icelake.so-0.12.3-1.fc43.x86_64.debug()(64bit) libggml-cpu-sandybridge.so-0.12.3-1.fc43.x86_64.debug()(64bit) libggml-cpu-skylakex.so-0.12.3-1.fc43.x86_64.debug()(64bit) libggml-cpu-sse42.so-0.12.3-1.fc43.x86_64.debug()(64bit) libggml-cpu-x64.so-0.12.3-1.fc43.x86_64.debug()(64bit) ollama-ggml-cpu-debuginfo = 0.12.3-1.fc43 ollama-ggml-cpu-debuginfo(x86-64) = 0.12.3-1.fc43 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc43 Processing files: ollama-ggml-rocm-debuginfo-0.12.3-1.fc43.x86_64 Provides: debuginfo(build-id) = d4be82ae2027adccf3ad25d13ccf05560bdea836 libggml-hip.so-0.12.3-1.fc43.x86_64.debug()(64bit) ollama-ggml-rocm-debuginfo = 0.12.3-1.fc43 ollama-ggml-rocm-debuginfo(x86-64) = 0.12.3-1.fc43 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc43 Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc43.src.rpm Wrote: /builddir/build/RPMS/ollama-debugsource-0.12.3-1.fc43.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-0.12.3-1.fc43.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-cpu-0.12.3-1.fc43.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-debuginfo-0.12.3-1.fc43.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-rocm-debuginfo-0.12.3-1.fc43.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-0.12.3-1.fc43.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-cpu-debuginfo-0.12.3-1.fc43.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-debuginfo-0.12.3-1.fc43.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-rocm-0.12.3-1.fc43.x86_64.rpm RPM build warnings: File listed twice: /usr/share/licenses/ollama Finish: rpmbuild ollama-0.12.3-1.fc43.src.rpm Finish: build phase for ollama-0.12.3-1.fc43.src.rpm INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan INFO: /var/lib/mock/fedora-43-x86_64-1759552642.548045/root/var/log/dnf5.log INFO: chroot_scan: creating tarball 
/var/lib/copr-rpmbuild/results/chroot_scan.tar.gz /bin/tar: Removing leading `/' from member names INFO: Done(/var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc43.src.rpm) Config(child) 75 minutes 31 seconds INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results INFO: Cleaning up build root ('cleanup_on_success=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot Finish: run Running RPMResults tool Package info: {
  "packages": [
    { "name": "ollama-ggml", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" },
    { "name": "ollama", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" },
    { "name": "ollama-ggml-rocm-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" },
    { "name": "ollama-ggml-rocm", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" },
    { "name": "ollama-ggml-cpu-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" },
    { "name": "ollama", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "src" },
    { "name": "ollama-debugsource", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" },
    { "name": "ollama-ggml-cpu", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" },
    { "name": "ollama-ggml-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" },
    { "name": "ollama-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc43", "arch": "x86_64" }
  ]
}
RPMResults finished
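The RPMResults tool closes the run with the package summary above as JSON. As a rough sketch of consuming that summary, the Go program below unmarshals the same shape; the Go type names are assumptions, and only the JSON keys (name, epoch, version, release, arch) are taken from the output.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// pkg models one entry of the "packages" array printed by RPMResults.
type pkg struct {
	Name    string `json:"name"`
	Epoch   any    `json:"epoch"` // null for every package in this build
	Version string `json:"version"`
	Release string `json:"release"`
	Arch    string `json:"arch"`
}

type packageInfo struct {
	Packages []pkg `json:"packages"`
}

func main() {
	// A trimmed copy of the summary above, for illustration only.
	raw := []byte(`{"packages":[
		{"name":"ollama","epoch":null,"version":"0.12.3","release":"1.fc43","arch":"x86_64"},
		{"name":"ollama-ggml","epoch":null,"version":"0.12.3","release":"1.fc43","arch":"x86_64"}]}`)

	var info packageInfo
	if err := json.Unmarshal(raw, &info); err != nil {
		log.Fatal(err)
	}
	for _, p := range info.Packages {
		// e.g. "ollama-0.12.3-1.fc43.x86_64"
		fmt.Printf("%s-%s-%s.%s\n", p.Name, p.Version, p.Release, p.Arch)
	}
}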