Warning: Permanently added '3.215.175.60' (ED25519) to the list of known hosts.

You can reproduce this build on your computer by running:

  sudo dnf install copr-rpmbuild
  /usr/bin/copr-rpmbuild --verbose --drop-resultdir --task-url https://copr.fedorainfracloud.org/backend/get-build-task/9644096-fedora-42-x86_64 --chroot fedora-42-x86_64

Version: 1.6
PID: 8678
Logging PID: 8680
Task: {'allow_user_ssh': False, 'appstream': False, 'background': False, 'build_id': 9644096, 'buildroot_pkgs': [], 'chroot': 'fedora-42-x86_64', 'enable_net': False, 'fedora_review': False, 'git_hash': 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd', 'git_repo': 'https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama', 'isolation': 'default', 'memory_reqs': 2048, 'package_name': 'ollama', 'package_version': '0.12.3-1', 'project_dirname': 'ollama', 'project_name': 'ollama', 'project_owner': 'fachep', 'repo_priority': None, 'repos': [{'baseurl': 'https://download.copr.fedorainfracloud.org/results/fachep/ollama/fedora-42-x86_64/', 'id': 'copr_base', 'name': 'Copr repository', 'priority': None}, {'baseurl': 'https://developer.download.nvidia.cn/compute/cuda/repos/fedora42/x86_64/', 'id': 'https_developer_download_nvidia_cn_compute_cuda_repos_fedora42_x86_64', 'name': 'Additional repo https_developer_download_nvidia_cn_compute_cuda_repos_fedora42_x86_64'}, {'baseurl': 'https://developer.download.nvidia.cn/compute/cuda/repos/fedora41/x86_64/', 'id': 'https_developer_download_nvidia_cn_compute_cuda_repos_fedora41_x86_64', 'name': 'Additional repo https_developer_download_nvidia_cn_compute_cuda_repos_fedora41_x86_64'}], 'sandbox': 'fachep/ollama--fachep', 'source_json': {}, 'source_type': None, 'ssh_public_keys': None, 'storage': 0, 'submitter': 'fachep', 'tags': [], 'task_id': '9644096-fedora-42-x86_64', 'timeout': 18000, 'uses_devel_repo': False, 'with_opts': [], 'without_opts': []}
Running: git clone https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama /var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama --depth 500 --no-single-branch --recursive
cmd: ['git', 'clone', 'https://copr-dist-git.fedorainfracloud.org/git/fachep/ollama/ollama', '/var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama', '--depth', '500', '--no-single-branch', '--recursive']
cwd: .
rc: 0
stdout: 
stderr: Cloning into '/var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama'...
Running: git checkout bd90d2d0f4106e3a74de46dced869f2b79bfddfd --
cmd: ['git', 'checkout', 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd', '--']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama
rc: 0
stdout: 
stderr: Note: switching to 'bd90d2d0f4106e3a74de46dced869f2b79bfddfd'.

You are in 'detached HEAD' state. You can look around, make experimental changes
and commit them, and you can discard any commits you make in this state without
impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command.
Example: git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at bd90d2d automatic import of ollama
Running: dist-git-client sources
cmd: ['dist-git-client', 'sources']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama
rc: 0
stdout: 
stderr: INFO: Reading stdout from command: git rev-parse --abbrev-ref HEAD
INFO: Reading stdout from command: git rev-parse HEAD
INFO: Reading sources specification file: sources
INFO: Downloading ollama-0.12.3.tar.gz
INFO: Reading stdout from command: curl --help all
INFO: Calling: curl -H Pragma: -o ollama-0.12.3.tar.gz --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/ollama-0.12.3.tar.gz/md5/f096acee5e82596e9afd4d07ed477de2/ollama-0.12.3.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.5M  100 10.5M    0     0   272M      0 --:--:-- --:--:-- --:--:--  276M
INFO: Reading stdout from command: md5sum ollama-0.12.3.tar.gz
INFO: Downloading vendor.tar.bz2
INFO: Calling: curl -H Pragma: -o vendor.tar.bz2 --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/vendor.tar.bz2/md5/c608d605610ed47b385cf54a6f6b2a2c/vendor.tar.bz2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6402k  100 6402k    0     0   217M      0 --:--:-- --:--:-- --:--:--  223M
INFO: Reading stdout from command: md5sum vendor.tar.bz2
tail: /var/lib/copr-rpmbuild/main.log: file truncated
Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1759552642.867825 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: mock.py version 6.3 starting (python version = 3.13.7, NVR = mock-6.3-1.fc42), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1759552642.867825 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
Start(bootstrap): init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish(bootstrap): init plugins
Start: init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish: init plugins
INFO: Signal handler active
Start: run
INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama/ollama.spec) Config(fedora-42-x86_64)
Start: clean chroot
Finish: clean chroot
Mock Version: 6.3
INFO: Mock Version: 6.3
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1759552642.867825/root.
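The dist-git-client step above reads the `sources` specification file in the dist-git checkout and fetches each archive from the Copr lookaside cache, checking its MD5 sum. A minimal manual equivalent, reusing the exact URLs, curl options, and checksums that appear in the log (md5sum -c is assumed to be available on the host):

# Fetch both source archives from the Copr dist-git lookaside cache.
curl -H 'Pragma:' --location --connect-timeout 60 --retry 3 --retry-delay 10 \
     --remote-time --show-error --fail --retry-all-errors \
     -o ollama-0.12.3.tar.gz \
     https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/ollama-0.12.3.tar.gz/md5/f096acee5e82596e9afd4d07ed477de2/ollama-0.12.3.tar.gz
curl -H 'Pragma:' --location --connect-timeout 60 --retry 3 --retry-delay 10 \
     --remote-time --show-error --fail --retry-all-errors \
     -o vendor.tar.bz2 \
     https://copr-dist-git.fedorainfracloud.org/repo/pkgs/fachep/ollama/ollama/vendor.tar.bz2/md5/c608d605610ed47b385cf54a6f6b2a2c/vendor.tar.bz2
# Verify against the MD5 sums embedded in the URLs above.
echo 'f096acee5e82596e9afd4d07ed477de2  ollama-0.12.3.tar.gz' | md5sum -c -
echo 'c608d605610ed47b385cf54a6f6b2a2c  vendor.tar.bz2' | md5sum -c -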
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
INFO: Guessed host environment type: unknown
INFO: Using container image: registry.fedoraproject.org/fedora:42
INFO: Pulling image: registry.fedoraproject.org/fedora:42
INFO: Tagging container image as mock-bootstrap-1b0b7a45-3d36-4b16-9cee-5653c71c3130
INFO: Checking that 558cfe7d333090d2bcc9797d6d46aded16cd8194b870aa7551693ce98b0f4010 image matches host's architecture
INFO: Copy content of container 558cfe7d333090d2bcc9797d6d46aded16cd8194b870aa7551693ce98b0f4010 to /var/lib/mock/fedora-42-x86_64-bootstrap-1759552642.867825/root
INFO: mounting 558cfe7d333090d2bcc9797d6d46aded16cd8194b870aa7551693ce98b0f4010 with podman image mount
INFO: image 558cfe7d333090d2bcc9797d6d46aded16cd8194b870aa7551693ce98b0f4010 as /var/lib/containers/storage/overlay/3051d623164d682cf6caf5e11dd6f09dac015f70b3405b90dde03f3a26ae8bb1/merged
INFO: umounting image 558cfe7d333090d2bcc9797d6d46aded16cd8194b870aa7551693ce98b0f4010 (/var/lib/containers/storage/overlay/3051d623164d682cf6caf5e11dd6f09dac015f70b3405b90dde03f3a26ae8bb1/merged) with podman image umount
INFO: Removing image mock-bootstrap-1b0b7a45-3d36-4b16-9cee-5653c71c3130
INFO: Package manager dnf5 detected and used (fallback)
INFO: Not updating bootstrap chroot, bootstrap_image_ready=True
Start(bootstrap): creating root cache
Finish(bootstrap): creating root cache
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-1759552642.867825/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Package manager dnf5 detected and used (direct choice)
INFO: Buildroot is handled by package management downloaded with a bootstrap image:
  rpm-4.20.1-1.fc42.x86_64
  rpm-sequoia-1.7.0-5.fc42.x86_64
  dnf5-5.2.16.0-1.fc42.x86_64
  dnf5-plugins-5.2.16.0-1.fc42.x86_64
Start: installing minimal buildroot with dnf5
Updating and loading repositories:
 Copr repository                        100% |   5.1 KiB/s |   1.6 KiB | 00m00s
 Additional repo https_developer_downlo 100% |  54.9 KiB/s |  47.8 KiB | 00m01s
 Additional repo https_developer_downlo 100% | 125.4 KiB/s | 109.0 KiB | 00m01s
 fedora                                 100% |  50.3 MiB/s |  69.4 MiB | 00m01s
 updates                                100% |   7.2 MiB/s |   9.1 MiB | 00m01s
Repositories loaded.
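At this point mock has bootstrapped the chroot from the registry.fedoraproject.org/fedora:42 container image and is about to install the minimal buildroot with dnf5. For local debugging, a comparable chroot can be prepared outside Copr with the stock fedora-42-x86_64 mock configuration; a rough sketch, assuming the mock package is installed and the invoking user is in the mock group (the Copr run instead uses the generated child.cfg passed via -r above):

# Initialize a plain fedora-42-x86_64 buildroot, open a shell in it, then clean it up.
mock -r fedora-42-x86_64 --init
mock -r fedora-42-x86_64 --shell
mock -r fedora-42-x86_64 --clean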
Package Arch Version Repository Size Installing group/module packages: bash x86_64 5.2.37-1.fc42 fedora 8.2 MiB bzip2 x86_64 1.0.8-20.fc42 fedora 99.3 KiB coreutils x86_64 9.6-6.fc42 updates 5.4 MiB cpio x86_64 2.15-4.fc42 fedora 1.1 MiB diffutils x86_64 3.12-1.fc42 updates 1.6 MiB fedora-release-common noarch 42-30 updates 20.2 KiB findutils x86_64 1:4.10.0-5.fc42 fedora 1.9 MiB gawk x86_64 5.3.1-1.fc42 fedora 1.7 MiB glibc-minimal-langpack x86_64 2.41-11.fc42 updates 0.0 B grep x86_64 3.11-10.fc42 fedora 1.0 MiB gzip x86_64 1.13-3.fc42 fedora 392.9 KiB info x86_64 7.2-3.fc42 fedora 357.9 KiB patch x86_64 2.8-1.fc42 updates 222.8 KiB redhat-rpm-config noarch 342-4.fc42 updates 185.5 KiB rpm-build x86_64 4.20.1-1.fc42 fedora 168.7 KiB sed x86_64 4.9-4.fc42 fedora 857.3 KiB shadow-utils x86_64 2:4.17.4-1.fc42 fedora 4.0 MiB tar x86_64 2:1.35-5.fc42 fedora 3.0 MiB unzip x86_64 6.0-66.fc42 fedora 390.3 KiB util-linux x86_64 2.40.4-7.fc42 fedora 3.4 MiB which x86_64 2.23-2.fc42 updates 83.5 KiB xz x86_64 1:5.8.1-2.fc42 updates 1.3 MiB Installing dependencies: add-determinism x86_64 0.6.0-1.fc42 fedora 2.5 MiB alternatives x86_64 1.33-1.fc42 updates 62.2 KiB ansible-srpm-macros noarch 1-17.1.fc42 fedora 35.7 KiB audit-libs x86_64 4.1.1-1.fc42 updates 378.8 KiB basesystem noarch 11-22.fc42 fedora 0.0 B binutils x86_64 2.44-6.fc42 updates 25.8 MiB build-reproducibility-srpm-macros noarch 0.6.0-1.fc42 fedora 735.0 B bzip2-libs x86_64 1.0.8-20.fc42 fedora 84.6 KiB ca-certificates noarch 2025.2.80_v9.0.304-1.0.fc42 updates 2.7 MiB coreutils-common x86_64 9.6-6.fc42 updates 11.1 MiB crypto-policies noarch 20250707-1.gitad370a8.fc42 updates 142.9 KiB curl x86_64 8.11.1-6.fc42 updates 450.6 KiB cyrus-sasl-lib x86_64 2.1.28-30.fc42 fedora 2.3 MiB debugedit x86_64 5.1-7.fc42 updates 192.7 KiB dwz x86_64 0.16-1.fc42 updates 287.1 KiB ed x86_64 1.21-2.fc42 fedora 146.5 KiB efi-srpm-macros noarch 6-3.fc42 updates 40.1 KiB elfutils x86_64 0.193-2.fc42 updates 2.9 MiB elfutils-debuginfod-client x86_64 0.193-2.fc42 updates 83.9 KiB elfutils-default-yama-scope noarch 0.193-2.fc42 updates 1.8 KiB elfutils-libelf x86_64 0.193-2.fc42 updates 1.2 MiB elfutils-libs x86_64 0.193-2.fc42 updates 683.4 KiB fedora-gpg-keys noarch 42-1 fedora 128.2 KiB fedora-release noarch 42-30 updates 0.0 B fedora-release-identity-basic noarch 42-30 updates 646.0 B fedora-repos noarch 42-1 fedora 4.9 KiB file x86_64 5.46-3.fc42 updates 100.2 KiB file-libs x86_64 5.46-3.fc42 updates 11.9 MiB filesystem x86_64 3.18-47.fc42 updates 112.0 B filesystem-srpm-macros noarch 3.18-47.fc42 updates 38.2 KiB fonts-srpm-macros noarch 1:2.0.5-22.fc42 updates 55.8 KiB forge-srpm-macros noarch 0.4.0-2.fc42 fedora 38.9 KiB fpc-srpm-macros noarch 1.3-14.fc42 fedora 144.0 B gdb-minimal x86_64 16.3-1.fc42 updates 13.2 MiB gdbm-libs x86_64 1:1.23-9.fc42 fedora 129.9 KiB ghc-srpm-macros noarch 1.9.2-2.fc42 fedora 779.0 B glibc x86_64 2.41-11.fc42 updates 6.6 MiB glibc-common x86_64 2.41-11.fc42 updates 1.0 MiB glibc-gconv-extra x86_64 2.41-11.fc42 updates 7.2 MiB gmp x86_64 1:6.3.0-4.fc42 fedora 811.3 KiB gnat-srpm-macros noarch 6-7.fc42 fedora 1.0 KiB gnulib-l10n noarch 20241231-1.fc42 updates 655.0 KiB go-srpm-macros noarch 3.8.0-1.fc42 updates 61.9 KiB jansson x86_64 2.14-2.fc42 fedora 93.1 KiB json-c x86_64 0.18-2.fc42 fedora 86.7 KiB kernel-srpm-macros noarch 1.0-25.fc42 fedora 1.9 KiB keyutils-libs x86_64 1.6.3-5.fc42 fedora 58.3 KiB krb5-libs x86_64 1.21.3-6.fc42 updates 2.3 MiB libacl x86_64 2.3.2-3.fc42 fedora 38.3 KiB libarchive x86_64 3.8.1-1.fc42 
updates 955.2 KiB libattr x86_64 2.5.2-5.fc42 fedora 27.1 KiB libblkid x86_64 2.40.4-7.fc42 fedora 262.4 KiB libbrotli x86_64 1.1.0-6.fc42 fedora 841.3 KiB libcap x86_64 2.73-2.fc42 fedora 207.1 KiB libcap-ng x86_64 0.8.5-4.fc42 fedora 72.9 KiB libcom_err x86_64 1.47.2-3.fc42 fedora 67.1 KiB libcurl x86_64 8.11.1-6.fc42 updates 834.1 KiB libeconf x86_64 0.7.6-2.fc42 updates 64.6 KiB libevent x86_64 2.1.12-15.fc42 fedora 903.1 KiB libfdisk x86_64 2.40.4-7.fc42 fedora 372.3 KiB libffi x86_64 3.4.6-5.fc42 fedora 82.3 KiB libgcc x86_64 15.2.1-1.fc42 updates 266.6 KiB libgomp x86_64 15.2.1-1.fc42 updates 541.1 KiB libidn2 x86_64 2.3.8-1.fc42 fedora 556.5 KiB libmount x86_64 2.40.4-7.fc42 fedora 356.3 KiB libnghttp2 x86_64 1.64.0-3.fc42 fedora 170.4 KiB libpkgconf x86_64 2.3.0-2.fc42 fedora 78.1 KiB libpsl x86_64 0.21.5-5.fc42 fedora 76.4 KiB libselinux x86_64 3.8-3.fc42 updates 193.1 KiB libsemanage x86_64 3.8.1-2.fc42 updates 304.4 KiB libsepol x86_64 3.8-1.fc42 fedora 826.0 KiB libsmartcols x86_64 2.40.4-7.fc42 fedora 180.4 KiB libssh x86_64 0.11.3-1.fc42 updates 567.1 KiB libssh-config noarch 0.11.3-1.fc42 updates 277.0 B libstdc++ x86_64 15.2.1-1.fc42 updates 2.8 MiB libtasn1 x86_64 4.20.0-1.fc42 fedora 176.3 KiB libtool-ltdl x86_64 2.5.4-4.fc42 fedora 70.1 KiB libunistring x86_64 1.1-9.fc42 fedora 1.7 MiB libuuid x86_64 2.40.4-7.fc42 fedora 37.3 KiB libverto x86_64 0.3.2-10.fc42 fedora 25.4 KiB libxcrypt x86_64 4.4.38-7.fc42 updates 284.5 KiB libxml2 x86_64 2.12.10-1.fc42 fedora 1.7 MiB libzstd x86_64 1.5.7-1.fc42 fedora 807.8 KiB lua-libs x86_64 5.4.8-1.fc42 updates 280.8 KiB lua-srpm-macros noarch 1-15.fc42 fedora 1.3 KiB lz4-libs x86_64 1.10.0-2.fc42 fedora 157.4 KiB mpfr x86_64 4.2.2-1.fc42 fedora 828.8 KiB ncurses-base noarch 6.5-5.20250125.fc42 fedora 326.8 KiB ncurses-libs x86_64 6.5-5.20250125.fc42 fedora 946.3 KiB ocaml-srpm-macros noarch 10-4.fc42 fedora 1.9 KiB openblas-srpm-macros noarch 2-19.fc42 fedora 112.0 B openldap x86_64 2.6.10-1.fc42 updates 655.8 KiB openssl-libs x86_64 1:3.2.4-4.fc42 updates 7.8 MiB p11-kit x86_64 0.25.8-1.fc42 updates 2.3 MiB p11-kit-trust x86_64 0.25.8-1.fc42 updates 446.5 KiB package-notes-srpm-macros noarch 0.5-13.fc42 fedora 1.6 KiB pam-libs x86_64 1.7.0-6.fc42 updates 126.7 KiB pcre2 x86_64 10.45-1.fc42 fedora 697.7 KiB pcre2-syntax noarch 10.45-1.fc42 fedora 273.9 KiB perl-srpm-macros noarch 1-57.fc42 fedora 861.0 B pkgconf x86_64 2.3.0-2.fc42 fedora 88.5 KiB pkgconf-m4 noarch 2.3.0-2.fc42 fedora 14.4 KiB pkgconf-pkg-config x86_64 2.3.0-2.fc42 fedora 989.0 B popt x86_64 1.19-8.fc42 fedora 132.8 KiB publicsuffix-list-dafsa noarch 20250616-1.fc42 updates 69.1 KiB pyproject-srpm-macros noarch 1.18.4-1.fc42 updates 1.9 KiB python-srpm-macros noarch 3.13-5.fc42 updates 51.0 KiB qt5-srpm-macros noarch 5.15.17-1.fc42 updates 500.0 B qt6-srpm-macros noarch 6.9.2-1.fc42 updates 464.0 B readline x86_64 8.2-13.fc42 fedora 485.0 KiB rpm x86_64 4.20.1-1.fc42 fedora 3.1 MiB rpm-build-libs x86_64 4.20.1-1.fc42 fedora 206.6 KiB rpm-libs x86_64 4.20.1-1.fc42 fedora 721.8 KiB rpm-sequoia x86_64 1.7.0-5.fc42 fedora 2.4 MiB rust-srpm-macros noarch 26.4-1.fc42 updates 4.8 KiB setup noarch 2.15.0-13.fc42 fedora 720.9 KiB sqlite-libs x86_64 3.47.2-5.fc42 updates 1.5 MiB systemd-libs x86_64 257.9-2.fc42 updates 2.2 MiB systemd-standalone-sysusers x86_64 257.9-2.fc42 updates 277.3 KiB tree-sitter-srpm-macros noarch 0.1.0-8.fc42 fedora 6.5 KiB util-linux-core x86_64 2.40.4-7.fc42 fedora 1.4 MiB xxhash-libs x86_64 0.8.3-2.fc42 fedora 90.2 KiB xz-libs x86_64 
1:5.8.1-2.fc42 updates 217.8 KiB zig-srpm-macros noarch 1-4.fc42 fedora 1.1 KiB zip x86_64 3.0-43.fc42 fedora 698.5 KiB zlib-ng-compat x86_64 2.2.5-2.fc42 updates 137.6 KiB zstd x86_64 1.5.7-1.fc42 fedora 1.7 MiB Installing groups: Buildsystem building group Transaction Summary: Installing: 149 packages Total size of inbound packages is 52 MiB. Need to download 52 MiB. After this operation, 178 MiB extra will be used (install 178 MiB, remove 0 B). [ 1/149] bzip2-0:1.0.8-20.fc42.x86_64 100% | 4.6 MiB/s | 52.1 KiB | 00m00s [ 2/149] findutils-1:4.10.0-5.fc42.x86 100% | 49.0 MiB/s | 551.5 KiB | 00m00s [ 3/149] bash-0:5.2.37-1.fc42.x86_64 100% | 72.3 MiB/s | 1.8 MiB | 00m00s [ 4/149] grep-0:3.11-10.fc42.x86_64 100% | 73.3 MiB/s | 300.1 KiB | 00m00s [ 5/149] gzip-0:1.13-3.fc42.x86_64 100% | 33.3 MiB/s | 170.4 KiB | 00m00s [ 6/149] info-0:7.2-3.fc42.x86_64 100% | 35.9 MiB/s | 183.8 KiB | 00m00s [ 7/149] rpm-build-0:4.20.1-1.fc42.x86 100% | 20.0 MiB/s | 81.8 KiB | 00m00s [ 8/149] sed-0:4.9-4.fc42.x86_64 100% | 103.3 MiB/s | 317.3 KiB | 00m00s [ 9/149] tar-2:1.35-5.fc42.x86_64 100% | 120.3 MiB/s | 862.5 KiB | 00m00s [ 10/149] shadow-utils-2:4.17.4-1.fc42. 100% | 132.2 MiB/s | 1.3 MiB | 00m00s [ 11/149] unzip-0:6.0-66.fc42.x86_64 100% | 60.1 MiB/s | 184.6 KiB | 00m00s [ 12/149] diffutils-0:3.12-1.fc42.x86_6 100% | 127.8 MiB/s | 392.6 KiB | 00m00s [ 13/149] coreutils-0:9.6-6.fc42.x86_64 100% | 162.6 MiB/s | 1.1 MiB | 00m00s [ 14/149] fedora-release-common-0:42-30 100% | 12.0 MiB/s | 24.5 KiB | 00m00s [ 15/149] glibc-minimal-langpack-0:2.41 100% | 32.1 MiB/s | 98.7 KiB | 00m00s [ 16/149] gawk-0:5.3.1-1.fc42.x86_64 100% | 215.8 MiB/s | 1.1 MiB | 00m00s [ 17/149] patch-0:2.8-1.fc42.x86_64 100% | 36.9 MiB/s | 113.5 KiB | 00m00s [ 18/149] redhat-rpm-config-0:342-4.fc4 100% | 26.4 MiB/s | 81.1 KiB | 00m00s [ 19/149] util-linux-0:2.40.4-7.fc42.x8 100% | 230.9 MiB/s | 1.2 MiB | 00m00s [ 20/149] which-0:2.23-2.fc42.x86_64 100% | 13.6 MiB/s | 41.7 KiB | 00m00s [ 21/149] xz-1:5.8.1-2.fc42.x86_64 100% | 139.8 MiB/s | 572.6 KiB | 00m00s [ 22/149] ncurses-libs-0:6.5-5.20250125 100% | 65.4 MiB/s | 335.0 KiB | 00m00s [ 23/149] bzip2-libs-0:1.0.8-20.fc42.x8 100% | 14.2 MiB/s | 43.6 KiB | 00m00s [ 24/149] cpio-0:2.15-4.fc42.x86_64 100% | 3.8 MiB/s | 294.6 KiB | 00m00s [ 25/149] pcre2-0:10.45-1.fc42.x86_64 100% | 51.3 MiB/s | 262.8 KiB | 00m00s [ 26/149] popt-0:1.19-8.fc42.x86_64 100% | 16.1 MiB/s | 65.9 KiB | 00m00s [ 27/149] readline-0:8.2-13.fc42.x86_64 100% | 52.5 MiB/s | 215.2 KiB | 00m00s [ 28/149] rpm-0:4.20.1-1.fc42.x86_64 100% | 133.9 MiB/s | 548.4 KiB | 00m00s [ 29/149] rpm-build-libs-0:4.20.1-1.fc4 100% | 24.3 MiB/s | 99.7 KiB | 00m00s [ 30/149] rpm-libs-0:4.20.1-1.fc42.x86_ 100% | 60.9 MiB/s | 312.0 KiB | 00m00s [ 31/149] zstd-0:1.5.7-1.fc42.x86_64 100% | 118.6 MiB/s | 485.9 KiB | 00m00s [ 32/149] libacl-0:2.3.2-3.fc42.x86_64 100% | 7.5 MiB/s | 23.0 KiB | 00m00s [ 33/149] setup-0:2.15.0-13.fc42.noarch 100% | 50.7 MiB/s | 155.8 KiB | 00m00s [ 34/149] libattr-0:2.5.2-5.fc42.x86_64 100% | 4.2 MiB/s | 17.1 KiB | 00m00s [ 35/149] gmp-1:6.3.0-4.fc42.x86_64 100% | 62.1 MiB/s | 317.7 KiB | 00m00s [ 36/149] libcap-0:2.73-2.fc42.x86_64 100% | 27.4 MiB/s | 84.3 KiB | 00m00s [ 37/149] fedora-repos-0:42-1.noarch 100% | 2.3 MiB/s | 9.2 KiB | 00m00s [ 38/149] mpfr-0:4.2.2-1.fc42.x86_64 100% | 56.2 MiB/s | 345.3 KiB | 00m00s [ 39/149] coreutils-common-0:9.6-6.fc42 100% | 189.4 MiB/s | 2.1 MiB | 00m00s [ 40/149] glibc-common-0:2.41-11.fc42.x 100% | 53.8 MiB/s | 385.6 KiB | 00m00s [ 41/149] ed-0:1.21-2.fc42.x86_64 100% | 
13.3 MiB/s | 82.0 KiB | 00m00s [ 42/149] ansible-srpm-macros-0:1-17.1. 100% | 6.6 MiB/s | 20.3 KiB | 00m00s [ 43/149] build-reproducibility-srpm-ma 100% | 3.8 MiB/s | 11.7 KiB | 00m00s [ 44/149] fpc-srpm-macros-0:1.3-14.fc42 100% | 3.9 MiB/s | 8.0 KiB | 00m00s [ 45/149] forge-srpm-macros-0:0.4.0-2.f 100% | 9.7 MiB/s | 19.9 KiB | 00m00s [ 46/149] ghc-srpm-macros-0:1.9.2-2.fc4 100% | 4.5 MiB/s | 9.2 KiB | 00m00s [ 47/149] gnat-srpm-macros-0:6-7.fc42.n 100% | 2.8 MiB/s | 8.6 KiB | 00m00s [ 48/149] kernel-srpm-macros-0:1.0-25.f 100% | 3.2 MiB/s | 9.9 KiB | 00m00s [ 49/149] ocaml-srpm-macros-0:10-4.fc42 100% | 3.0 MiB/s | 9.2 KiB | 00m00s [ 50/149] openblas-srpm-macros-0:2-19.f 100% | 2.5 MiB/s | 7.8 KiB | 00m00s [ 51/149] lua-srpm-macros-0:1-15.fc42.n 100% | 1.7 MiB/s | 8.9 KiB | 00m00s [ 52/149] package-notes-srpm-macros-0:0 100% | 4.5 MiB/s | 9.3 KiB | 00m00s [ 53/149] perl-srpm-macros-0:1-57.fc42. 100% | 4.2 MiB/s | 8.5 KiB | 00m00s [ 54/149] tree-sitter-srpm-macros-0:0.1 100% | 5.5 MiB/s | 11.2 KiB | 00m00s [ 55/149] zig-srpm-macros-0:1-4.fc42.no 100% | 2.7 MiB/s | 8.2 KiB | 00m00s [ 56/149] zip-0:3.0-43.fc42.x86_64 100% | 51.5 MiB/s | 263.5 KiB | 00m00s [ 57/149] libblkid-0:2.40.4-7.fc42.x86_ 100% | 23.9 MiB/s | 122.5 KiB | 00m00s [ 58/149] libcap-ng-0:0.8.5-4.fc42.x86_ 100% | 10.5 MiB/s | 32.2 KiB | 00m00s [ 59/149] libfdisk-0:2.40.4-7.fc42.x86_ 100% | 51.6 MiB/s | 158.5 KiB | 00m00s [ 60/149] libmount-0:2.40.4-7.fc42.x86_ 100% | 37.9 MiB/s | 155.1 KiB | 00m00s [ 61/149] libuuid-0:2.40.4-7.fc42.x86_6 100% | 8.2 MiB/s | 25.3 KiB | 00m00s [ 62/149] libsmartcols-0:2.40.4-7.fc42. 100% | 13.2 MiB/s | 81.2 KiB | 00m00s [ 63/149] util-linux-core-0:2.40.4-7.fc 100% | 103.4 MiB/s | 529.2 KiB | 00m00s [ 64/149] xz-libs-1:5.8.1-2.fc42.x86_64 100% | 22.1 MiB/s | 113.0 KiB | 00m00s [ 65/149] ncurses-base-0:6.5-5.20250125 100% | 17.2 MiB/s | 88.1 KiB | 00m00s [ 66/149] pcre2-syntax-0:10.45-1.fc42.n 100% | 39.5 MiB/s | 161.7 KiB | 00m00s [ 67/149] libzstd-0:1.5.7-1.fc42.x86_64 100% | 102.5 MiB/s | 314.8 KiB | 00m00s [ 68/149] lz4-libs-0:1.10.0-2.fc42.x86_ 100% | 25.4 MiB/s | 78.1 KiB | 00m00s [ 69/149] rpm-sequoia-0:1.7.0-5.fc42.x8 100% | 127.1 MiB/s | 911.1 KiB | 00m00s [ 70/149] gnulib-l10n-0:20241231-1.fc42 100% | 29.3 MiB/s | 150.1 KiB | 00m00s [ 71/149] fedora-gpg-keys-0:42-1.noarch 100% | 33.1 MiB/s | 135.6 KiB | 00m00s [ 72/149] add-determinism-0:0.6.0-1.fc4 100% | 128.1 MiB/s | 918.3 KiB | 00m00s [ 73/149] basesystem-0:11-22.fc42.noarc 100% | 2.4 MiB/s | 7.3 KiB | 00m00s [ 74/149] glibc-gconv-extra-0:2.41-11.f 100% | 136.9 MiB/s | 1.6 MiB | 00m00s [ 75/149] glibc-0:2.41-11.fc42.x86_64 100% | 140.3 MiB/s | 2.2 MiB | 00m00s [ 76/149] efi-srpm-macros-0:6-3.fc42.no 100% | 11.0 MiB/s | 22.5 KiB | 00m00s [ 77/149] dwz-0:0.16-1.fc42.x86_64 100% | 16.5 MiB/s | 135.5 KiB | 00m00s [ 78/149] file-0:5.46-3.fc42.x86_64 100% | 15.8 MiB/s | 48.6 KiB | 00m00s [ 79/149] filesystem-srpm-macros-0:3.18 100% | 8.5 MiB/s | 26.1 KiB | 00m00s [ 80/149] file-libs-0:5.46-3.fc42.x86_6 100% | 138.3 MiB/s | 849.5 KiB | 00m00s [ 81/149] fonts-srpm-macros-1:2.0.5-22. 100% | 8.9 MiB/s | 27.2 KiB | 00m00s [ 82/149] go-srpm-macros-0:3.8.0-1.fc42 100% | 5.5 MiB/s | 28.3 KiB | 00m00s [ 83/149] pyproject-srpm-macros-0:1.18. 
100% | 6.7 MiB/s | 13.7 KiB | 00m00s [ 84/149] python-srpm-macros-0:3.13-5.f 100% | 7.3 MiB/s | 22.5 KiB | 00m00s [ 85/149] qt5-srpm-macros-0:5.15.17-1.f 100% | 4.3 MiB/s | 8.7 KiB | 00m00s [ 86/149] qt6-srpm-macros-0:6.9.2-1.fc4 100% | 3.1 MiB/s | 9.4 KiB | 00m00s [ 87/149] rust-srpm-macros-0:26.4-1.fc4 100% | 3.6 MiB/s | 11.2 KiB | 00m00s [ 88/149] libgcc-0:15.2.1-1.fc42.x86_64 100% | 32.1 MiB/s | 131.6 KiB | 00m00s [ 89/149] zlib-ng-compat-0:2.2.5-2.fc42 100% | 19.3 MiB/s | 79.2 KiB | 00m00s [ 90/149] filesystem-0:3.18-47.fc42.x86 100% | 166.7 MiB/s | 1.3 MiB | 00m00s [ 91/149] elfutils-libs-0:0.193-2.fc42. 100% | 44.0 MiB/s | 270.2 KiB | 00m00s [ 92/149] elfutils-libelf-0:0.193-2.fc4 100% | 29.0 MiB/s | 207.8 KiB | 00m00s [ 93/149] elfutils-0:0.193-2.fc42.x86_6 100% | 79.7 MiB/s | 571.4 KiB | 00m00s [ 94/149] elfutils-debuginfod-client-0: 100% | 9.2 MiB/s | 46.9 KiB | 00m00s [ 95/149] json-c-0:0.18-2.fc42.x86_64 100% | 11.0 MiB/s | 44.9 KiB | 00m00s [ 96/149] libselinux-0:3.8-3.fc42.x86_6 100% | 31.5 MiB/s | 96.7 KiB | 00m00s [ 97/149] libsepol-0:3.8-1.fc42.x86_64 100% | 85.2 MiB/s | 348.9 KiB | 00m00s [ 98/149] systemd-libs-0:257.9-2.fc42.x 100% | 158.3 MiB/s | 810.3 KiB | 00m00s [ 99/149] libxcrypt-0:4.4.38-7.fc42.x86 100% | 31.1 MiB/s | 127.2 KiB | 00m00s [100/149] libstdc++-0:15.2.1-1.fc42.x86 100% | 99.6 MiB/s | 917.8 KiB | 00m00s [101/149] audit-libs-0:4.1.1-1.fc42.x86 100% | 22.5 MiB/s | 138.5 KiB | 00m00s [102/149] pam-libs-0:1.7.0-6.fc42.x86_6 100% | 14.0 MiB/s | 57.5 KiB | 00m00s [103/149] libeconf-0:0.7.6-2.fc42.x86_6 100% | 11.4 MiB/s | 35.2 KiB | 00m00s [104/149] libsemanage-0:3.8.1-2.fc42.x8 100% | 40.1 MiB/s | 123.2 KiB | 00m00s [105/149] lua-libs-0:5.4.8-1.fc42.x86_6 100% | 32.2 MiB/s | 131.9 KiB | 00m00s [106/149] sqlite-libs-0:3.47.2-5.fc42.x 100% | 147.2 MiB/s | 753.8 KiB | 00m00s [107/149] openssl-libs-1:3.2.4-4.fc42.x 100% | 166.5 MiB/s | 2.3 MiB | 00m00s [108/149] libgomp-0:15.2.1-1.fc42.x86_6 100% | 40.3 MiB/s | 371.6 KiB | 00m00s [109/149] jansson-0:2.14-2.fc42.x86_64 100% | 11.2 MiB/s | 45.7 KiB | 00m00s [110/149] debugedit-0:5.1-7.fc42.x86_64 100% | 19.2 MiB/s | 78.8 KiB | 00m00s [111/149] libarchive-0:3.8.1-1.fc42.x86 100% | 68.6 MiB/s | 421.6 KiB | 00m00s [112/149] libxml2-0:2.12.10-1.fc42.x86_ 100% | 95.4 MiB/s | 683.7 KiB | 00m00s [113/149] pkgconf-pkg-config-0:2.3.0-2. 100% | 2.4 MiB/s | 9.9 KiB | 00m00s [114/149] binutils-0:2.44-6.fc42.x86_64 100% | 192.3 MiB/s | 5.8 MiB | 00m00s [115/149] pkgconf-0:2.3.0-2.fc42.x86_64 100% | 4.4 MiB/s | 44.9 KiB | 00m00s [116/149] pkgconf-m4-0:2.3.0-2.fc42.noa 100% | 1.7 MiB/s | 14.2 KiB | 00m00s [117/149] libpkgconf-0:2.3.0-2.fc42.x86 100% | 18.7 MiB/s | 38.4 KiB | 00m00s [118/149] curl-0:8.11.1-6.fc42.x86_64 100% | 71.6 MiB/s | 220.0 KiB | 00m00s [119/149] ca-certificates-0:2025.2.80_v 100% | 158.5 MiB/s | 973.5 KiB | 00m00s [120/149] crypto-policies-0:20250707-1. 
100% | 18.7 MiB/s | 96.0 KiB | 00m00s [121/149] elfutils-default-yama-scope-0 100% | 2.5 MiB/s | 12.6 KiB | 00m00s [122/149] libffi-0:3.4.6-5.fc42.x86_64 100% | 9.7 MiB/s | 39.9 KiB | 00m00s [123/149] p11-kit-0:0.25.8-1.fc42.x86_6 100% | 98.3 MiB/s | 503.5 KiB | 00m00s [124/149] libtasn1-0:4.20.0-1.fc42.x86_ 100% | 18.3 MiB/s | 75.0 KiB | 00m00s [125/149] p11-kit-trust-0:0.25.8-1.fc42 100% | 34.0 MiB/s | 139.2 KiB | 00m00s [126/149] fedora-release-0:42-30.noarch 100% | 6.6 MiB/s | 13.5 KiB | 00m00s [127/149] alternatives-0:1.33-1.fc42.x8 100% | 13.2 MiB/s | 40.5 KiB | 00m00s [128/149] fedora-release-identity-basic 100% | 4.7 MiB/s | 14.3 KiB | 00m00s [129/149] libcurl-0:8.11.1-6.fc42.x86_6 100% | 60.5 MiB/s | 371.7 KiB | 00m00s [130/149] libbrotli-0:1.1.0-6.fc42.x86_ 100% | 55.3 MiB/s | 339.8 KiB | 00m00s [131/149] libidn2-0:2.3.8-1.fc42.x86_64 100% | 19.0 MiB/s | 174.8 KiB | 00m00s [132/149] libnghttp2-0:1.64.0-3.fc42.x8 100% | 15.2 MiB/s | 77.7 KiB | 00m00s [133/149] libpsl-0:0.21.5-5.fc42.x86_64 100% | 12.5 MiB/s | 64.0 KiB | 00m00s [134/149] libssh-config-0:0.11.3-1.fc42 100% | 8.9 MiB/s | 9.1 KiB | 00m00s [135/149] libssh-0:0.11.3-1.fc42.x86_64 100% | 75.9 MiB/s | 233.0 KiB | 00m00s [136/149] libunistring-0:1.1-9.fc42.x86 100% | 132.4 MiB/s | 542.5 KiB | 00m00s [137/149] xxhash-libs-0:0.8.3-2.fc42.x8 100% | 7.6 MiB/s | 39.1 KiB | 00m00s [138/149] systemd-standalone-sysusers-0 100% | 21.6 MiB/s | 154.8 KiB | 00m00s [139/149] publicsuffix-list-dafsa-0:202 100% | 8.3 MiB/s | 59.2 KiB | 00m00s [140/149] krb5-libs-0:1.21.3-6.fc42.x86 100% | 82.4 MiB/s | 759.8 KiB | 00m00s [141/149] keyutils-libs-0:1.6.3-5.fc42. 100% | 5.1 MiB/s | 31.5 KiB | 00m00s [142/149] libcom_err-0:1.47.2-3.fc42.x8 100% | 4.4 MiB/s | 26.9 KiB | 00m00s [143/149] libverto-0:0.3.2-10.fc42.x86_ 100% | 4.1 MiB/s | 20.8 KiB | 00m00s [144/149] gdb-minimal-0:16.3-1.fc42.x86 100% | 137.7 MiB/s | 4.4 MiB | 00m00s [145/149] openldap-0:2.6.10-1.fc42.x86_ 100% | 28.1 MiB/s | 258.6 KiB | 00m00s [146/149] cyrus-sasl-lib-0:2.1.28-30.fc 100% | 64.6 MiB/s | 793.5 KiB | 00m00s [147/149] libtool-ltdl-0:2.5.4-4.fc42.x 100% | 8.8 MiB/s | 36.2 KiB | 00m00s [148/149] libevent-0:2.1.12-15.fc42.x86 100% | 36.3 MiB/s | 260.2 KiB | 00m00s [149/149] gdbm-libs-1:1.23-9.fc42.x86_6 100% | 27.8 MiB/s | 57.0 KiB | 00m00s -------------------------------------------------------------------------------- [149/149] Total 100% | 162.4 MiB/s | 52.4 MiB | 00m00s Running transaction Importing OpenPGP key 0x105EF944: UserID : "Fedora (42) " Fingerprint: B0F4950458F69E1150C6C5EDC8AC4916105EF944 From : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-42-primary The key was successfully imported. [ 1/151] Verify package files 100% | 903.0 B/s | 149.0 B | 00m00s [ 2/151] Prepare transaction 100% | 4.4 KiB/s | 149.0 B | 00m00s [ 3/151] Installing libgcc-0:15.2.1-1. 100% | 261.9 MiB/s | 268.2 KiB | 00m00s [ 4/151] Installing publicsuffix-list- 100% | 0.0 B/s | 69.8 KiB | 00m00s [ 5/151] Installing libssh-config-0:0. 
100% | 0.0 B/s | 816.0 B | 00m00s [ 6/151] Installing fedora-release-ide 100% | 0.0 B/s | 904.0 B | 00m00s [ 7/151] Installing fedora-gpg-keys-0: 100% | 56.9 MiB/s | 174.8 KiB | 00m00s [ 8/151] Installing fedora-repos-0:42- 100% | 0.0 B/s | 5.7 KiB | 00m00s [ 9/151] Installing fedora-release-com 100% | 23.9 MiB/s | 24.5 KiB | 00m00s [ 10/151] Installing fedora-release-0:4 100% | 10.1 KiB/s | 124.0 B | 00m00s >>> Running sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch >>> Finished sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch >>> Scriptlet output: >>> Creating group 'adm' with GID 4. >>> Creating group 'audio' with GID 63. >>> Creating group 'bin' with GID 1. >>> Creating group 'cdrom' with GID 11. >>> Creating group 'clock' with GID 103. >>> Creating group 'daemon' with GID 2. >>> Creating group 'dialout' with GID 18. >>> Creating group 'disk' with GID 6. >>> Creating group 'floppy' with GID 19. >>> Creating group 'ftp' with GID 50. >>> Creating group 'games' with GID 20. >>> Creating group 'input' with GID 104. >>> Creating group 'kmem' with GID 9. >>> Creating group 'kvm' with GID 36. >>> Creating group 'lock' with GID 54. >>> Creating group 'lp' with GID 7. >>> Creating group 'mail' with GID 12. >>> Creating group 'man' with GID 15. >>> Creating group 'mem' with GID 8. >>> Creating group 'nobody' with GID 65534. >>> Creating group 'render' with GID 105. >>> Creating group 'root' with GID 0. >>> Creating group 'sgx' with GID 106. >>> Creating group 'sys' with GID 3. >>> Creating group 'tape' with GID 33. >>> Creating group 'tty' with GID 5. >>> Creating group 'users' with GID 100. >>> Creating group 'utmp' with GID 22. >>> Creating group 'video' with GID 39. >>> Creating group 'wheel' with GID 10. >>> >>> Running sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch >>> Finished sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch >>> Scriptlet output: >>> Creating user 'adm' (adm) with UID 3 and GID 4. >>> Creating user 'bin' (bin) with UID 1 and GID 1. >>> Creating user 'daemon' (daemon) with UID 2 and GID 2. >>> Creating user 'ftp' (FTP User) with UID 14 and GID 50. >>> Creating user 'games' (games) with UID 12 and GID 20. >>> Creating user 'halt' (halt) with UID 7 and GID 0. >>> Creating user 'lp' (lp) with UID 4 and GID 7. >>> Creating user 'mail' (mail) with UID 8 and GID 12. >>> Creating user 'nobody' (Kernel Overflow User) with UID 65534 and GID 65534. >>> Creating user 'operator' (operator) with UID 11 and GID 0. >>> Creating user 'root' (Super User) with UID 0 and GID 0. >>> Creating user 'shutdown' (shutdown) with UID 6 and GID 0. >>> Creating user 'sync' (sync) with UID 5 and GID 0. >>> [ 11/151] Installing setup-0:2.15.0-13. 100% | 54.6 MiB/s | 726.7 KiB | 00m00s >>> [RPM] /etc/hosts created as /etc/hosts.rpmnew [ 12/151] Installing filesystem-0:3.18- 100% | 3.0 MiB/s | 212.8 KiB | 00m00s [ 13/151] Installing basesystem-0:11-22 100% | 0.0 B/s | 124.0 B | 00m00s [ 14/151] Installing pkgconf-m4-0:2.3.0 100% | 0.0 B/s | 14.8 KiB | 00m00s [ 15/151] Installing rust-srpm-macros-0 100% | 0.0 B/s | 5.6 KiB | 00m00s [ 16/151] Installing qt6-srpm-macros-0: 100% | 0.0 B/s | 740.0 B | 00m00s [ 17/151] Installing qt5-srpm-macros-0: 100% | 0.0 B/s | 776.0 B | 00m00s [ 18/151] Installing gnulib-l10n-0:2024 100% | 215.5 MiB/s | 661.9 KiB | 00m00s [ 19/151] Installing coreutils-common-0 100% | 429.0 MiB/s | 11.2 MiB | 00m00s [ 20/151] Installing pcre2-syntax-0:10. 
100% | 269.9 MiB/s | 276.4 KiB | 00m00s [ 21/151] Installing ncurses-base-0:6.5 100% | 86.0 MiB/s | 352.2 KiB | 00m00s [ 22/151] Installing glibc-minimal-lang 100% | 0.0 B/s | 124.0 B | 00m00s [ 23/151] Installing ncurses-libs-0:6.5 100% | 232.6 MiB/s | 952.8 KiB | 00m00s [ 24/151] Installing glibc-0:2.41-11.fc 100% | 221.7 MiB/s | 6.7 MiB | 00m00s [ 25/151] Installing bash-0:5.2.37-1.fc 100% | 302.6 MiB/s | 8.2 MiB | 00m00s [ 26/151] Installing glibc-common-0:2.4 100% | 63.8 MiB/s | 1.0 MiB | 00m00s [ 27/151] Installing glibc-gconv-extra- 100% | 304.5 MiB/s | 7.3 MiB | 00m00s [ 28/151] Installing zlib-ng-compat-0:2 100% | 0.0 B/s | 138.4 KiB | 00m00s [ 29/151] Installing bzip2-libs-0:1.0.8 100% | 0.0 B/s | 85.7 KiB | 00m00s [ 30/151] Installing xz-libs-1:5.8.1-2. 100% | 213.8 MiB/s | 218.9 KiB | 00m00s [ 31/151] Installing libuuid-0:2.40.4-7 100% | 0.0 B/s | 38.4 KiB | 00m00s [ 32/151] Installing libblkid-0:2.40.4- 100% | 257.4 MiB/s | 263.5 KiB | 00m00s [ 33/151] Installing popt-0:1.19-8.fc42 100% | 68.1 MiB/s | 139.4 KiB | 00m00s [ 34/151] Installing readline-0:8.2-13. 100% | 475.7 MiB/s | 487.1 KiB | 00m00s [ 35/151] Installing gmp-1:6.3.0-4.fc42 100% | 397.2 MiB/s | 813.5 KiB | 00m00s [ 36/151] Installing libzstd-0:1.5.7-1. 100% | 395.1 MiB/s | 809.1 KiB | 00m00s [ 37/151] Installing elfutils-libelf-0: 100% | 388.8 MiB/s | 1.2 MiB | 00m00s [ 38/151] Installing libstdc++-0:15.2.1 100% | 405.2 MiB/s | 2.8 MiB | 00m00s [ 39/151] Installing libxcrypt-0:4.4.38 100% | 280.4 MiB/s | 287.2 KiB | 00m00s [ 40/151] Installing libattr-0:2.5.2-5. 100% | 0.0 B/s | 28.1 KiB | 00m00s [ 41/151] Installing libacl-0:2.3.2-3.f 100% | 0.0 B/s | 39.2 KiB | 00m00s [ 42/151] Installing dwz-0:0.16-1.fc42. 100% | 23.5 MiB/s | 288.5 KiB | 00m00s [ 43/151] Installing mpfr-0:4.2.2-1.fc4 100% | 405.5 MiB/s | 830.4 KiB | 00m00s [ 44/151] Installing gawk-0:5.3.1-1.fc4 100% | 105.9 MiB/s | 1.7 MiB | 00m00s [ 45/151] Installing unzip-0:6.0-66.fc4 100% | 32.0 MiB/s | 393.8 KiB | 00m00s [ 46/151] Installing file-libs-0:5.46-3 100% | 790.5 MiB/s | 11.9 MiB | 00m00s [ 47/151] Installing file-0:5.46-3.fc42 100% | 5.2 MiB/s | 101.7 KiB | 00m00s [ 48/151] Installing crypto-policies-0: 100% | 41.0 MiB/s | 167.8 KiB | 00m00s [ 49/151] Installing pcre2-0:10.45-1.fc 100% | 341.4 MiB/s | 699.1 KiB | 00m00s [ 50/151] Installing grep-0:3.11-10.fc4 100% | 62.7 MiB/s | 1.0 MiB | 00m00s [ 51/151] Installing xz-1:5.8.1-2.fc42. 
100% | 83.2 MiB/s | 1.3 MiB | 00m00s [ 52/151] Installing libcap-ng-0:0.8.5- 100% | 0.0 B/s | 74.8 KiB | 00m00s [ 53/151] Installing audit-libs-0:4.1.1 100% | 372.6 MiB/s | 381.5 KiB | 00m00s [ 54/151] Installing libsmartcols-0:2.4 100% | 177.3 MiB/s | 181.5 KiB | 00m00s [ 55/151] Installing lz4-libs-0:1.10.0- 100% | 154.7 MiB/s | 158.5 KiB | 00m00s [ 56/151] Installing libsepol-0:3.8-1.f 100% | 403.8 MiB/s | 827.0 KiB | 00m00s [ 57/151] Installing libselinux-0:3.8-3 100% | 189.8 MiB/s | 194.3 KiB | 00m00s [ 58/151] Installing findutils-1:4.10.0 100% | 110.2 MiB/s | 1.9 MiB | 00m00s [ 59/151] Installing sed-0:4.9-4.fc42.x 100% | 56.3 MiB/s | 865.5 KiB | 00m00s [ 60/151] Installing libmount-0:2.40.4- 100% | 348.9 MiB/s | 357.3 KiB | 00m00s [ 61/151] Installing libeconf-0:0.7.6-2 100% | 64.7 MiB/s | 66.2 KiB | 00m00s [ 62/151] Installing pam-libs-0:1.7.0-6 100% | 126.1 MiB/s | 129.1 KiB | 00m00s [ 63/151] Installing libcap-0:2.73-2.fc 100% | 17.3 MiB/s | 212.1 KiB | 00m00s [ 64/151] Installing systemd-libs-0:257 100% | 372.0 MiB/s | 2.2 MiB | 00m00s [ 65/151] Installing lua-libs-0:5.4.8-1 100% | 275.4 MiB/s | 282.0 KiB | 00m00s [ 66/151] Installing libffi-0:3.4.6-5.f 100% | 81.7 MiB/s | 83.7 KiB | 00m00s [ 67/151] Installing libtasn1-0:4.20.0- 100% | 173.9 MiB/s | 178.1 KiB | 00m00s [ 68/151] Installing p11-kit-0:0.25.8-1 100% | 114.5 MiB/s | 2.3 MiB | 00m00s [ 69/151] Installing alternatives-0:1.3 100% | 4.8 MiB/s | 63.8 KiB | 00m00s [ 70/151] Installing libunistring-0:1.1 100% | 345.3 MiB/s | 1.7 MiB | 00m00s [ 71/151] Installing libidn2-0:2.3.8-1. 100% | 183.2 MiB/s | 562.7 KiB | 00m00s [ 72/151] Installing libpsl-0:0.21.5-5. 100% | 0.0 B/s | 77.5 KiB | 00m00s [ 73/151] Installing p11-kit-trust-0:0. 100% | 20.8 MiB/s | 448.3 KiB | 00m00s [ 74/151] Installing openssl-libs-1:3.2 100% | 411.5 MiB/s | 7.8 MiB | 00m00s [ 75/151] Installing coreutils-0:9.6-6. 100% | 175.9 MiB/s | 5.5 MiB | 00m00s [ 76/151] Installing ca-certificates-0: 100% | 2.2 MiB/s | 2.5 MiB | 00m01s [ 77/151] Installing gzip-0:1.13-3.fc42 100% | 27.8 MiB/s | 398.4 KiB | 00m00s [ 78/151] Installing rpm-sequoia-0:1.7. 100% | 402.4 MiB/s | 2.4 MiB | 00m00s [ 79/151] Installing libevent-0:2.1.12- 100% | 295.2 MiB/s | 906.9 KiB | 00m00s [ 80/151] Installing util-linux-core-0: 100% | 83.9 MiB/s | 1.4 MiB | 00m00s [ 81/151] Installing systemd-standalone 100% | 22.6 MiB/s | 277.8 KiB | 00m00s [ 82/151] Installing tar-2:1.35-5.fc42. 100% | 164.6 MiB/s | 3.0 MiB | 00m00s [ 83/151] Installing libsemanage-0:3.8. 100% | 149.5 MiB/s | 306.2 KiB | 00m00s [ 84/151] Installing shadow-utils-2:4.1 100% | 149.7 MiB/s | 4.0 MiB | 00m00s [ 85/151] Installing zstd-0:1.5.7-1.fc4 100% | 114.0 MiB/s | 1.7 MiB | 00m00s [ 86/151] Installing zip-0:3.0-43.fc42. 
100% | 52.8 MiB/s | 702.4 KiB | 00m00s [ 87/151] Installing libfdisk-0:2.40.4- 100% | 364.7 MiB/s | 373.4 KiB | 00m00s [ 88/151] Installing libxml2-0:2.12.10- 100% | 113.1 MiB/s | 1.7 MiB | 00m00s [ 89/151] Installing libarchive-0:3.8.1 100% | 311.6 MiB/s | 957.1 KiB | 00m00s [ 90/151] Installing bzip2-0:1.0.8-20.f 100% | 8.5 MiB/s | 103.8 KiB | 00m00s [ 91/151] Installing add-determinism-0: 100% | 145.1 MiB/s | 2.5 MiB | 00m00s [ 92/151] Installing build-reproducibil 100% | 0.0 B/s | 1.0 KiB | 00m00s [ 93/151] Installing sqlite-libs-0:3.47 100% | 378.1 MiB/s | 1.5 MiB | 00m00s [ 94/151] Installing rpm-libs-0:4.20.1- 100% | 353.2 MiB/s | 723.4 KiB | 00m00s [ 95/151] Installing ed-0:1.21-2.fc42.x 100% | 12.1 MiB/s | 148.8 KiB | 00m00s [ 96/151] Installing patch-0:2.8-1.fc42 100% | 18.3 MiB/s | 224.3 KiB | 00m00s [ 97/151] Installing filesystem-srpm-ma 100% | 0.0 B/s | 38.9 KiB | 00m00s [ 98/151] Installing elfutils-default-y 100% | 408.6 KiB/s | 2.0 KiB | 00m00s [ 99/151] Installing elfutils-libs-0:0. 100% | 334.6 MiB/s | 685.2 KiB | 00m00s [100/151] Installing cpio-0:2.15-4.fc42 100% | 68.7 MiB/s | 1.1 MiB | 00m00s [101/151] Installing diffutils-0:3.12-1 100% | 97.6 MiB/s | 1.6 MiB | 00m00s [102/151] Installing json-c-0:0.18-2.fc 100% | 85.9 MiB/s | 88.0 KiB | 00m00s [103/151] Installing libgomp-0:15.2.1-1 100% | 529.8 MiB/s | 542.5 KiB | 00m00s [104/151] Installing rpm-build-libs-0:4 100% | 202.5 MiB/s | 207.4 KiB | 00m00s [105/151] Installing jansson-0:2.14-2.f 100% | 92.2 MiB/s | 94.4 KiB | 00m00s [106/151] Installing libpkgconf-0:2.3.0 100% | 0.0 B/s | 79.2 KiB | 00m00s [107/151] Installing pkgconf-0:2.3.0-2. 100% | 8.1 MiB/s | 91.0 KiB | 00m00s [108/151] Installing pkgconf-pkg-config 100% | 161.2 KiB/s | 1.8 KiB | 00m00s [109/151] Installing libbrotli-0:1.1.0- 100% | 274.6 MiB/s | 843.6 KiB | 00m00s [110/151] Installing libnghttp2-0:1.64. 100% | 167.5 MiB/s | 171.5 KiB | 00m00s [111/151] Installing xxhash-libs-0:0.8. 100% | 0.0 B/s | 91.6 KiB | 00m00s [112/151] Installing keyutils-libs-0:1. 100% | 58.3 MiB/s | 59.7 KiB | 00m00s [113/151] Installing libcom_err-0:1.47. 100% | 0.0 B/s | 68.2 KiB | 00m00s [114/151] Installing libverto-0:0.3.2-1 100% | 26.6 MiB/s | 27.2 KiB | 00m00s [115/151] Installing krb5-libs-0:1.21.3 100% | 327.4 MiB/s | 2.3 MiB | 00m00s [116/151] Installing libssh-0:0.11.3-1. 100% | 277.9 MiB/s | 569.2 KiB | 00m00s [117/151] Installing libtool-ltdl-0:2.5 100% | 0.0 B/s | 71.2 KiB | 00m00s [118/151] Installing gdbm-libs-1:1.23-9 100% | 128.5 MiB/s | 131.6 KiB | 00m00s [119/151] Installing cyrus-sasl-lib-0:2 100% | 135.5 MiB/s | 2.3 MiB | 00m00s [120/151] Installing openldap-0:2.6.10- 100% | 322.1 MiB/s | 659.6 KiB | 00m00s [121/151] Installing libcurl-0:8.11.1-6 100% | 407.8 MiB/s | 835.2 KiB | 00m00s [122/151] Installing elfutils-debuginfo 100% | 7.0 MiB/s | 86.2 KiB | 00m00s [123/151] Installing elfutils-0:0.193-2 100% | 162.3 MiB/s | 2.9 MiB | 00m00s [124/151] Installing binutils-0:2.44-6. 100% | 349.2 MiB/s | 25.8 MiB | 00m00s [125/151] Installing gdb-minimal-0:16.3 100% | 308.1 MiB/s | 13.2 MiB | 00m00s [126/151] Installing debugedit-0:5.1-7. 
100% | 15.9 MiB/s | 195.4 KiB | 00m00s [127/151] Installing curl-0:8.11.1-6.fc 100% | 21.1 MiB/s | 453.1 KiB | 00m00s [128/151] Installing rpm-0:4.20.1-1.fc4 100% | 104.1 MiB/s | 2.5 MiB | 00m00s [129/151] Installing lua-srpm-macros-0: 100% | 0.0 B/s | 1.9 KiB | 00m00s [130/151] Installing tree-sitter-srpm-m 100% | 0.0 B/s | 7.4 KiB | 00m00s [131/151] Installing zig-srpm-macros-0: 100% | 0.0 B/s | 1.7 KiB | 00m00s [132/151] Installing efi-srpm-macros-0: 100% | 0.0 B/s | 41.1 KiB | 00m00s [133/151] Installing perl-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s [134/151] Installing package-notes-srpm 100% | 0.0 B/s | 2.0 KiB | 00m00s [135/151] Installing openblas-srpm-macr 100% | 0.0 B/s | 392.0 B | 00m00s [136/151] Installing ocaml-srpm-macros- 100% | 0.0 B/s | 2.2 KiB | 00m00s [137/151] Installing kernel-srpm-macros 100% | 0.0 B/s | 2.3 KiB | 00m00s [138/151] Installing gnat-srpm-macros-0 100% | 0.0 B/s | 1.3 KiB | 00m00s [139/151] Installing ghc-srpm-macros-0: 100% | 0.0 B/s | 1.0 KiB | 00m00s [140/151] Installing fpc-srpm-macros-0: 100% | 0.0 B/s | 420.0 B | 00m00s [141/151] Installing ansible-srpm-macro 100% | 0.0 B/s | 36.2 KiB | 00m00s [142/151] Installing forge-srpm-macros- 100% | 0.0 B/s | 40.3 KiB | 00m00s [143/151] Installing fonts-srpm-macros- 100% | 0.0 B/s | 57.0 KiB | 00m00s [144/151] Installing go-srpm-macros-0:3 100% | 0.0 B/s | 63.0 KiB | 00m00s [145/151] Installing python-srpm-macros 100% | 0.0 B/s | 52.2 KiB | 00m00s [146/151] Installing redhat-rpm-config- 100% | 93.9 MiB/s | 192.2 KiB | 00m00s [147/151] Installing rpm-build-0:4.20.1 100% | 13.3 MiB/s | 177.4 KiB | 00m00s [148/151] Installing pyproject-srpm-mac 100% | 2.4 MiB/s | 2.5 KiB | 00m00s [149/151] Installing util-linux-0:2.40. 100% | 104.9 MiB/s | 3.5 MiB | 00m00s [150/151] Installing which-0:2.23-2.fc4 100% | 7.0 MiB/s | 85.7 KiB | 00m00s [151/151] Installing info-0:7.2-3.fc42. 100% | 241.7 KiB/s | 358.3 KiB | 00m01s Complete! 
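With the minimal buildroot installed, the log moves on to the source RPM build (rpmbuild -bs) from the spec file and the downloaded sources. A roughly equivalent standalone invocation, assuming the spec and source archives sit in the current directory and using the stock configuration rather than Copr's child.cfg:

# Build only the SRPM from ollama.spec plus the fetched sources.
mock -r fedora-42-x86_64 --buildsrpm \
     --spec ollama.spec \
     --sources . \
     --resultdir ./results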
Finish: installing minimal buildroot with dnf5 Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: INFO: add-determinism-0.6.0-1.fc42.x86_64 alternatives-1.33-1.fc42.x86_64 ansible-srpm-macros-1-17.1.fc42.noarch audit-libs-4.1.1-1.fc42.x86_64 basesystem-11-22.fc42.noarch bash-5.2.37-1.fc42.x86_64 binutils-2.44-6.fc42.x86_64 build-reproducibility-srpm-macros-0.6.0-1.fc42.noarch bzip2-1.0.8-20.fc42.x86_64 bzip2-libs-1.0.8-20.fc42.x86_64 ca-certificates-2025.2.80_v9.0.304-1.0.fc42.noarch coreutils-9.6-6.fc42.x86_64 coreutils-common-9.6-6.fc42.x86_64 cpio-2.15-4.fc42.x86_64 crypto-policies-20250707-1.gitad370a8.fc42.noarch curl-8.11.1-6.fc42.x86_64 cyrus-sasl-lib-2.1.28-30.fc42.x86_64 debugedit-5.1-7.fc42.x86_64 diffutils-3.12-1.fc42.x86_64 dwz-0.16-1.fc42.x86_64 ed-1.21-2.fc42.x86_64 efi-srpm-macros-6-3.fc42.noarch elfutils-0.193-2.fc42.x86_64 elfutils-debuginfod-client-0.193-2.fc42.x86_64 elfutils-default-yama-scope-0.193-2.fc42.noarch elfutils-libelf-0.193-2.fc42.x86_64 elfutils-libs-0.193-2.fc42.x86_64 fedora-gpg-keys-42-1.noarch fedora-release-42-30.noarch fedora-release-common-42-30.noarch fedora-release-identity-basic-42-30.noarch fedora-repos-42-1.noarch file-5.46-3.fc42.x86_64 file-libs-5.46-3.fc42.x86_64 filesystem-3.18-47.fc42.x86_64 filesystem-srpm-macros-3.18-47.fc42.noarch findutils-4.10.0-5.fc42.x86_64 fonts-srpm-macros-2.0.5-22.fc42.noarch forge-srpm-macros-0.4.0-2.fc42.noarch fpc-srpm-macros-1.3-14.fc42.noarch gawk-5.3.1-1.fc42.x86_64 gdb-minimal-16.3-1.fc42.x86_64 gdbm-libs-1.23-9.fc42.x86_64 ghc-srpm-macros-1.9.2-2.fc42.noarch glibc-2.41-11.fc42.x86_64 glibc-common-2.41-11.fc42.x86_64 glibc-gconv-extra-2.41-11.fc42.x86_64 glibc-minimal-langpack-2.41-11.fc42.x86_64 gmp-6.3.0-4.fc42.x86_64 gnat-srpm-macros-6-7.fc42.noarch gnulib-l10n-20241231-1.fc42.noarch go-srpm-macros-3.8.0-1.fc42.noarch gpg-pubkey-105ef944-65ca83d1 grep-3.11-10.fc42.x86_64 gzip-1.13-3.fc42.x86_64 info-7.2-3.fc42.x86_64 jansson-2.14-2.fc42.x86_64 json-c-0.18-2.fc42.x86_64 kernel-srpm-macros-1.0-25.fc42.noarch keyutils-libs-1.6.3-5.fc42.x86_64 krb5-libs-1.21.3-6.fc42.x86_64 libacl-2.3.2-3.fc42.x86_64 libarchive-3.8.1-1.fc42.x86_64 libattr-2.5.2-5.fc42.x86_64 libblkid-2.40.4-7.fc42.x86_64 libbrotli-1.1.0-6.fc42.x86_64 libcap-2.73-2.fc42.x86_64 libcap-ng-0.8.5-4.fc42.x86_64 libcom_err-1.47.2-3.fc42.x86_64 libcurl-8.11.1-6.fc42.x86_64 libeconf-0.7.6-2.fc42.x86_64 libevent-2.1.12-15.fc42.x86_64 libfdisk-2.40.4-7.fc42.x86_64 libffi-3.4.6-5.fc42.x86_64 libgcc-15.2.1-1.fc42.x86_64 libgomp-15.2.1-1.fc42.x86_64 libidn2-2.3.8-1.fc42.x86_64 libmount-2.40.4-7.fc42.x86_64 libnghttp2-1.64.0-3.fc42.x86_64 libpkgconf-2.3.0-2.fc42.x86_64 libpsl-0.21.5-5.fc42.x86_64 libselinux-3.8-3.fc42.x86_64 libsemanage-3.8.1-2.fc42.x86_64 libsepol-3.8-1.fc42.x86_64 libsmartcols-2.40.4-7.fc42.x86_64 libssh-0.11.3-1.fc42.x86_64 libssh-config-0.11.3-1.fc42.noarch libstdc++-15.2.1-1.fc42.x86_64 libtasn1-4.20.0-1.fc42.x86_64 libtool-ltdl-2.5.4-4.fc42.x86_64 libunistring-1.1-9.fc42.x86_64 libuuid-2.40.4-7.fc42.x86_64 libverto-0.3.2-10.fc42.x86_64 libxcrypt-4.4.38-7.fc42.x86_64 libxml2-2.12.10-1.fc42.x86_64 libzstd-1.5.7-1.fc42.x86_64 lua-libs-5.4.8-1.fc42.x86_64 lua-srpm-macros-1-15.fc42.noarch lz4-libs-1.10.0-2.fc42.x86_64 mpfr-4.2.2-1.fc42.x86_64 ncurses-base-6.5-5.20250125.fc42.noarch ncurses-libs-6.5-5.20250125.fc42.x86_64 ocaml-srpm-macros-10-4.fc42.noarch openblas-srpm-macros-2-19.fc42.noarch openldap-2.6.10-1.fc42.x86_64 openssl-libs-3.2.4-4.fc42.x86_64 p11-kit-0.25.8-1.fc42.x86_64 
p11-kit-trust-0.25.8-1.fc42.x86_64 package-notes-srpm-macros-0.5-13.fc42.noarch pam-libs-1.7.0-6.fc42.x86_64 patch-2.8-1.fc42.x86_64 pcre2-10.45-1.fc42.x86_64 pcre2-syntax-10.45-1.fc42.noarch perl-srpm-macros-1-57.fc42.noarch pkgconf-2.3.0-2.fc42.x86_64 pkgconf-m4-2.3.0-2.fc42.noarch pkgconf-pkg-config-2.3.0-2.fc42.x86_64 popt-1.19-8.fc42.x86_64 publicsuffix-list-dafsa-20250616-1.fc42.noarch pyproject-srpm-macros-1.18.4-1.fc42.noarch python-srpm-macros-3.13-5.fc42.noarch qt5-srpm-macros-5.15.17-1.fc42.noarch qt6-srpm-macros-6.9.2-1.fc42.noarch readline-8.2-13.fc42.x86_64 redhat-rpm-config-342-4.fc42.noarch rpm-4.20.1-1.fc42.x86_64 rpm-build-4.20.1-1.fc42.x86_64 rpm-build-libs-4.20.1-1.fc42.x86_64 rpm-libs-4.20.1-1.fc42.x86_64 rpm-sequoia-1.7.0-5.fc42.x86_64 rust-srpm-macros-26.4-1.fc42.noarch sed-4.9-4.fc42.x86_64 setup-2.15.0-13.fc42.noarch shadow-utils-4.17.4-1.fc42.x86_64 sqlite-libs-3.47.2-5.fc42.x86_64 systemd-libs-257.9-2.fc42.x86_64 systemd-standalone-sysusers-257.9-2.fc42.x86_64 tar-1.35-5.fc42.x86_64 tree-sitter-srpm-macros-0.1.0-8.fc42.noarch unzip-6.0-66.fc42.x86_64 util-linux-2.40.4-7.fc42.x86_64 util-linux-core-2.40.4-7.fc42.x86_64 which-2.23-2.fc42.x86_64 xxhash-libs-0.8.3-2.fc42.x86_64 xz-5.8.1-2.fc42.x86_64 xz-libs-5.8.1-2.fc42.x86_64 zig-srpm-macros-1-4.fc42.noarch zip-3.0-43.fc42.x86_64 zlib-ng-compat-2.2.5-2.fc42.x86_64 zstd-1.5.7-1.fc42.x86_64
Start: buildsrpm
Start: rpmbuild -bs
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1759536000
Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc42.src.rpm
Finish: rpmbuild -bs
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-42-x86_64-1759552642.867825/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
Finish: buildsrpm
INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-t52wt98g/ollama/ollama.spec) Config(child) 0 minutes 18 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
INFO: Start(/var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc42.src.rpm) Config(fedora-42-x86_64)
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1759552642.867825/root.
INFO: reusing tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1759552642.867825/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-1759552642.867825/root.
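The phase that starts here takes the freshly written ollama-0.12.3-1.fc42.src.rpm and rebuilds it in a clean chroot, which is where the build dependencies listed below (cmake, gcc-c++, golang, the ROCm stack, etc.) get pulled in. A minimal local equivalent, assuming the src.rpm from the results directory is at hand; note that the Copr run additionally configures the CUDA repositories from the task definition in its child.cfg, which mock's --addrepo option could approximate if needed:

# Rebuild the source RPM produced in the previous phase.
mock -r fedora-42-x86_64 --rebuild ollama-0.12.3-1.fc42.src.rpm --resultdir ./results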
INFO: calling preinit hooks INFO: enabled root cache Start: unpacking root cache Finish: unpacking root cache INFO: enabled package manager cache Start: cleaning package manager metadata Finish: cleaning package manager metadata INFO: enabled HW Info plugin INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-4.20.1-1.fc42.x86_64 rpm-sequoia-1.7.0-5.fc42.x86_64 dnf5-5.2.16.0-1.fc42.x86_64 dnf5-plugins-5.2.16.0-1.fc42.x86_64 Finish: chroot init Start: build phase for ollama-0.12.3-1.fc42.src.rpm Start: build setup for ollama-0.12.3-1.fc42.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc42.src.rpm Updating and loading repositories: Additional repo https_developer_downlo 100% | 19.7 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 19.7 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 7.5 KiB/s | 1.5 KiB | 00m00s fedora 100% | 45.9 KiB/s | 30.9 KiB | 00m01s updates 100% | 17.9 KiB/s | 7.3 KiB | 00m00s Repositories loaded. Package Arch Version Repository Size Installing: cmake x86_64 3.31.6-2.fc42 fedora 34.2 MiB gcc-c++ x86_64 15.2.1-1.fc42 updates 41.3 MiB go-rpm-macros x86_64 3.8.0-1.fc42 updates 96.6 KiB go-vendor-tools noarch 0.8.0-1.fc42 updates 318.5 KiB hipblas-devel x86_64 6.3.0-4.fc42 fedora 3.1 MiB rocblas-devel x86_64 6.3.0-4.fc42 fedora 2.8 MiB rocm-comgr-devel x86_64 18-37.rocm6.3.1.fc42 fedora 103.0 KiB rocm-hip-devel x86_64 6.3.1-4.fc42 updates 2.7 MiB rocm-runtime-devel x86_64 6.3.1-4.fc42 fedora 565.6 KiB rocminfo x86_64 6.3.0-2.fc42 fedora 77.3 KiB systemd-rpm-macros noarch 257.9-2.fc42 updates 10.7 KiB Installing dependencies: Lmod x86_64 8.7.58-1.fc42 fedora 1.3 MiB annobin-docs noarch 12.94-1.fc42 updates 98.9 KiB annobin-plugin-gcc x86_64 12.94-1.fc42 updates 993.5 KiB cmake-data noarch 3.31.6-2.fc42 fedora 8.5 MiB cmake-filesystem x86_64 3.31.6-2.fc42 fedora 0.0 B cmake-rpm-macros noarch 3.31.6-2.fc42 fedora 7.7 KiB cpp x86_64 15.2.1-1.fc42 updates 37.9 MiB emacs-filesystem noarch 1:30.0-4.fc42 fedora 0.0 B expat x86_64 2.7.2-1.fc42 updates 298.6 KiB gcc x86_64 15.2.1-1.fc42 updates 111.2 MiB gcc-plugin-annobin x86_64 15.2.1-1.fc42 updates 57.1 KiB git x86_64 2.51.0-2.fc42 updates 56.4 KiB git-core x86_64 2.51.0-2.fc42 updates 23.6 MiB git-core-doc noarch 2.51.0-2.fc42 updates 17.7 MiB glibc-devel x86_64 2.41-11.fc42 updates 2.3 MiB go-filesystem x86_64 3.8.0-1.fc42 updates 0.0 B golang x86_64 1.24.7-1.fc42 updates 8.9 MiB golang-bin x86_64 1.24.7-1.fc42 updates 121.6 MiB golang-src noarch 1.24.7-1.fc42 updates 79.2 MiB golist x86_64 0.10.4-6.fc42 fedora 4.4 MiB groff-base x86_64 1.23.0-8.fc42 fedora 3.9 MiB hipblas x86_64 6.3.0-4.fc42 fedora 1.1 MiB hipblas-common-devel noarch 6.3.0-2.fc42 fedora 16.4 KiB hipcc x86_64 18-37.rocm6.3.1.fc42 fedora 761.6 KiB hwdata noarch 0.399-1.fc42 updates 9.6 MiB jsoncpp x86_64 1.9.6-1.fc42 fedora 261.6 KiB kernel-headers x86_64 6.16.2-200.fc42 updates 6.7 MiB kmod x86_64 33-3.fc42 fedora 235.4 KiB less x86_64 679-1.fc42 updates 406.1 KiB libb2 x86_64 0.98.1-13.fc42 fedora 46.1 KiB libcbor x86_64 0.11.0-3.fc42 fedora 77.8 KiB libdrm x86_64 2.4.125-1.fc42 updates 395.8 KiB libedit x86_64 3.1-55.20250104cvs.fc42 fedora 244.1 KiB libfido2 x86_64 1.15.0-3.fc42 fedora 242.1 KiB libmpc x86_64 1.3.1-7.fc42 fedora 164.5 KiB libpciaccess x86_64 0.16-15.fc42 fedora 44.5 KiB libstdc++-devel x86_64 15.2.1-1.fc42 updates 16.1 MiB libtommath x86_64 1.3.1~rc1-5.fc42 fedora 130.4 KiB libuv 
x86_64 1:1.51.0-1.fc42 updates 570.2 KiB libxcrypt-devel x86_64 4.4.38-7.fc42 updates 30.8 KiB lua x86_64 5.4.8-1.fc42 updates 602.2 KiB lua-filesystem x86_64 1.8.0-14.fc42 fedora 64.8 KiB lua-json noarch 1.3.4-9.fc42 fedora 59.4 KiB lua-lpeg x86_64 1.1.0-5.fc42 fedora 181.4 KiB lua-posix x86_64 36.2.1-8.fc42 fedora 553.0 KiB lua-term x86_64 0.08-5.fc42 fedora 22.8 KiB make x86_64 1:4.4.1-10.fc42 fedora 1.8 MiB mpdecimal x86_64 4.0.1-1.fc42 updates 217.2 KiB ncurses x86_64 6.5-5.20250125.fc42 fedora 608.1 KiB numactl-libs x86_64 2.0.19-2.fc42 fedora 52.9 KiB openssh x86_64 9.9p1-11.fc42 updates 1.4 MiB openssh-clients x86_64 9.9p1-11.fc42 updates 2.7 MiB perl-AutoLoader noarch 5.74-519.fc42 updates 20.5 KiB perl-B x86_64 1.89-519.fc42 updates 498.0 KiB perl-Carp noarch 1.54-512.fc42 fedora 46.6 KiB perl-Class-Struct noarch 0.68-519.fc42 updates 25.4 KiB perl-Data-Dumper x86_64 2.189-513.fc42 fedora 115.6 KiB perl-Digest noarch 1.20-512.fc42 fedora 35.3 KiB perl-Digest-MD5 x86_64 2.59-6.fc42 fedora 59.7 KiB perl-DynaLoader x86_64 1.56-519.fc42 updates 32.1 KiB perl-Encode x86_64 4:3.21-512.fc42 fedora 4.7 MiB perl-Errno x86_64 1.38-519.fc42 updates 8.3 KiB perl-Error noarch 1:0.17030-1.fc42 fedora 76.7 KiB perl-Exporter noarch 5.78-512.fc42 fedora 54.3 KiB perl-Fcntl x86_64 1.18-519.fc42 updates 48.9 KiB perl-File-Basename noarch 2.86-519.fc42 updates 14.0 KiB perl-File-Copy noarch 2.41-519.fc42 updates 19.6 KiB perl-File-Path noarch 2.18-512.fc42 fedora 63.5 KiB perl-File-Temp noarch 1:0.231.100-512.fc42 fedora 162.3 KiB perl-File-Which noarch 1.27-13.fc42 fedora 30.4 KiB perl-File-stat noarch 1.14-519.fc42 updates 12.5 KiB perl-FileHandle noarch 2.05-519.fc42 updates 9.3 KiB perl-Getopt-Long noarch 1:2.58-3.fc42 fedora 144.5 KiB perl-Getopt-Std noarch 1.14-519.fc42 updates 11.2 KiB perl-Git noarch 2.51.0-2.fc42 updates 64.4 KiB perl-HTTP-Tiny noarch 0.090-2.fc42 fedora 154.4 KiB perl-IO x86_64 1.55-519.fc42 updates 147.0 KiB perl-IO-Socket-IP noarch 0.43-2.fc42 fedora 100.3 KiB perl-IO-Socket-SSL noarch 2.089-2.fc42 fedora 703.3 KiB perl-IPC-Open3 noarch 1.22-519.fc42 updates 22.5 KiB perl-MIME-Base32 noarch 1.303-23.fc42 fedora 30.7 KiB perl-MIME-Base64 x86_64 3.16-512.fc42 fedora 42.0 KiB perl-Net-SSLeay x86_64 1.94-8.fc42 fedora 1.3 MiB perl-POSIX x86_64 2.20-519.fc42 updates 231.0 KiB perl-PathTools x86_64 3.91-513.fc42 fedora 180.0 KiB perl-Pod-Escapes noarch 1:1.07-512.fc42 fedora 24.9 KiB perl-Pod-Perldoc noarch 3.28.01-513.fc42 fedora 163.7 KiB perl-Pod-Simple noarch 1:3.45-512.fc42 fedora 560.8 KiB perl-Pod-Usage noarch 4:2.05-1.fc42 fedora 86.3 KiB perl-Scalar-List-Utils x86_64 5:1.70-1.fc42 updates 144.9 KiB perl-SelectSaver noarch 1.02-519.fc42 updates 2.2 KiB perl-Socket x86_64 4:2.038-512.fc42 fedora 119.9 KiB perl-Storable x86_64 1:3.32-512.fc42 fedora 232.3 KiB perl-Symbol noarch 1.09-519.fc42 updates 6.8 KiB perl-Term-ANSIColor noarch 5.01-513.fc42 fedora 97.5 KiB perl-Term-Cap noarch 1.18-512.fc42 fedora 29.3 KiB perl-TermReadKey x86_64 2.38-24.fc42 fedora 64.0 KiB perl-Text-ParseWords noarch 3.31-512.fc42 fedora 13.6 KiB perl-Text-Tabs+Wrap noarch 2024.001-512.fc42 fedora 22.6 KiB perl-Time-Local noarch 2:1.350-512.fc42 fedora 68.9 KiB perl-URI noarch 5.31-2.fc42 fedora 257.0 KiB perl-base noarch 2.27-519.fc42 updates 12.5 KiB perl-constant noarch 1.33-513.fc42 fedora 26.2 KiB perl-if noarch 0.61.000-519.fc42 updates 5.8 KiB perl-interpreter x86_64 4:5.40.3-519.fc42 updates 118.4 KiB perl-lib x86_64 0.65-519.fc42 updates 8.5 KiB perl-libnet noarch 3.15-513.fc42 fedora 
289.4 KiB perl-libs x86_64 4:5.40.3-519.fc42 updates 9.8 MiB perl-locale noarch 1.12-519.fc42 updates 6.5 KiB perl-mro x86_64 1.29-519.fc42 updates 41.5 KiB perl-overload noarch 1.37-519.fc42 updates 71.5 KiB perl-overloading noarch 0.02-519.fc42 updates 4.8 KiB perl-parent noarch 1:0.244-2.fc42 fedora 10.3 KiB perl-podlators noarch 1:6.0.2-3.fc42 fedora 317.5 KiB perl-vars noarch 1.05-519.fc42 updates 3.9 KiB procps-ng x86_64 4.0.4-6.fc42 fedora 1.0 MiB python-pip-wheel noarch 24.3.1-5.fc42 updates 1.2 MiB python3 x86_64 3.13.7-1.fc42 updates 28.7 KiB python3-boolean.py noarch 4.0-13.fc42 fedora 513.1 KiB python3-libs x86_64 3.13.7-1.fc42 updates 40.1 MiB python3-license-expression noarch 30.4.1-2.fc42 fedora 1.1 MiB python3-zstarfile noarch 0.2.0-4.fc42 fedora 23.8 KiB rhash x86_64 1.4.5-2.fc42 fedora 351.0 KiB rocblas x86_64 6.3.0-4.fc42 fedora 3.8 GiB rocm-clang x86_64 18-37.rocm6.3.1.fc42 fedora 117.6 MiB rocm-clang-devel x86_64 18-37.rocm6.3.1.fc42 fedora 21.8 MiB rocm-clang-libs x86_64 18-37.rocm6.3.1.fc42 fedora 113.9 MiB rocm-clang-runtime-devel x86_64 18-37.rocm6.3.1.fc42 fedora 6.9 MiB rocm-comgr x86_64 18-37.rocm6.3.1.fc42 fedora 137.1 MiB rocm-device-libs x86_64 18-37.rocm6.3.1.fc42 fedora 3.2 MiB rocm-hip x86_64 6.3.1-4.fc42 updates 23.3 MiB rocm-libc++ x86_64 18-37.rocm6.3.1.fc42 fedora 1.5 MiB rocm-libc++-devel x86_64 18-37.rocm6.3.1.fc42 fedora 7.0 MiB rocm-lld x86_64 18-37.rocm6.3.1.fc42 fedora 6.5 MiB rocm-llvm x86_64 18-37.rocm6.3.1.fc42 fedora 79.3 MiB rocm-llvm-devel x86_64 18-37.rocm6.3.1.fc42 fedora 24.4 MiB rocm-llvm-filesystem x86_64 18-37.rocm6.3.1.fc42 fedora 0.0 B rocm-llvm-libs x86_64 18-37.rocm6.3.1.fc42 fedora 93.8 MiB rocm-llvm-static x86_64 18-37.rocm6.3.1.fc42 fedora 233.9 MiB rocm-rpm-macros-modules noarch 6.3.1-5.fc42 fedora 24.8 KiB rocm-runtime x86_64 6.3.1-4.fc42 fedora 2.9 MiB rocsolver x86_64 6.3.0-4.fc42 fedora 130.2 MiB tcl x86_64 1:9.0.0-7.fc42 fedora 4.3 MiB tzdata noarch 2025b-1.fc42 fedora 1.6 MiB vim-filesystem noarch 2:9.1.1775-1.fc42 updates 40.0 B Transaction Summary: Installing: 156 packages Total size of inbound packages is 630 MiB. Need to download 630 MiB. After this operation, 5 GiB extra will be used (install 5 GiB, remove 0 B). [ 1/156] rocblas-devel-0:6.3.0-4.fc42. 100% | 9.0 MiB/s | 110.3 KiB | 00m00s [ 2/156] rocm-comgr-devel-0:18-37.rocm 100% | 2.6 MiB/s | 31.5 KiB | 00m00s [ 3/156] hipblas-devel-0:6.3.0-4.fc42. 100% | 6.8 MiB/s | 104.9 KiB | 00m00s [ 4/156] rocminfo-0:6.3.0-2.fc42.x86_6 100% | 9.3 MiB/s | 38.1 KiB | 00m00s [ 5/156] rocm-runtime-devel-0:6.3.1-4. 100% | 12.8 MiB/s | 92.1 KiB | 00m00s [ 6/156] go-rpm-macros-0:3.8.0-1.fc42. 100% | 4.2 MiB/s | 38.3 KiB | 00m00s [ 7/156] go-vendor-tools-0:0.8.0-1.fc4 100% | 15.3 MiB/s | 125.5 KiB | 00m00s [ 8/156] rocm-hip-devel-0:6.3.1-4.fc42 100% | 33.5 MiB/s | 240.5 KiB | 00m00s [ 9/156] systemd-rpm-macros-0:257.9-2. 
100% | 5.5 MiB/s | 34.1 KiB | 00m00s [ 10/156] cmake-filesystem-0:3.31.6-2.f 100% | 2.5 MiB/s | 17.6 KiB | 00m00s [ 11/156] hipblas-0:6.3.0-4.fc42.x86_64 100% | 12.9 MiB/s | 158.7 KiB | 00m00s [ 12/156] hipblas-common-devel-0:6.3.0- 100% | 2.1 MiB/s | 12.9 KiB | 00m00s [ 13/156] cmake-0:3.31.6-2.fc42.x86_64 100% | 102.4 MiB/s | 12.2 MiB | 00m00s [ 14/156] gcc-c++-0:15.2.1-1.fc42.x86_6 100% | 115.6 MiB/s | 15.3 MiB | 00m00s [ 15/156] rocm-device-libs-0:18-37.rocm 100% | 8.2 MiB/s | 498.3 KiB | 00m00s [ 16/156] rocm-runtime-0:6.3.1-4.fc42.x 100% | 27.6 MiB/s | 622.1 KiB | 00m00s [ 17/156] kmod-0:33-3.fc42.x86_64 100% | 10.1 MiB/s | 123.7 KiB | 00m00s [ 18/156] cmake-data-0:3.31.6-2.fc42.no 100% | 56.1 MiB/s | 2.5 MiB | 00m00s [ 19/156] jsoncpp-0:1.9.6-1.fc42.x86_64 100% | 9.2 MiB/s | 103.5 KiB | 00m00s [ 20/156] make-1:4.4.1-10.fc42.x86_64 100% | 44.1 MiB/s | 587.0 KiB | 00m00s [ 21/156] rhash-0:1.4.5-2.fc42.x86_64 100% | 17.6 MiB/s | 198.7 KiB | 00m00s [ 22/156] libmpc-0:1.3.1-7.fc42.x86_64 100% | 6.3 MiB/s | 70.9 KiB | 00m00s [ 23/156] rocm-comgr-0:18-37.rocm6.3.1. 100% | 106.8 MiB/s | 29.5 MiB | 00m00s [ 24/156] golist-0:0.10.4-6.fc42.x86_64 100% | 48.0 MiB/s | 1.6 MiB | 00m00s [ 25/156] go-filesystem-0:3.8.0-1.fc42. 100% | 1.2 MiB/s | 8.9 KiB | 00m00s [ 26/156] python3-license-expression-0: 100% | 26.2 MiB/s | 134.0 KiB | 00m00s [ 27/156] python3-zstarfile-0:0.2.0-4.f 100% | 3.6 MiB/s | 18.3 KiB | 00m00s [ 28/156] perl-File-Which-0:1.27-13.fc4 100% | 2.3 MiB/s | 21.6 KiB | 00m00s [ 29/156] perl-PathTools-0:3.91-513.fc4 100% | 9.5 MiB/s | 87.3 KiB | 00m00s [ 30/156] perl-URI-0:5.31-2.fc42.noarch 100% | 17.2 MiB/s | 140.7 KiB | 00m00s [ 31/156] rocm-hip-0:6.3.1-4.fc42.x86_6 100% | 77.8 MiB/s | 9.3 MiB | 00m00s [ 32/156] rocm-rpm-macros-modules-0:6.3 100% | 2.7 MiB/s | 16.8 KiB | 00m00s [ 33/156] gcc-0:15.2.1-1.fc42.x86_64 100% | 77.8 MiB/s | 39.4 MiB | 00m01s [ 34/156] rocm-clang-devel-0:18-37.rocm 100% | 54.1 MiB/s | 2.4 MiB | 00m00s [ 35/156] rocm-lld-0:18-37.rocm6.3.1.fc 100% | 76.0 MiB/s | 1.4 MiB | 00m00s [ 36/156] rocm-llvm-static-0:18-37.rocm 100% | 90.6 MiB/s | 27.5 MiB | 00m00s [ 37/156] numactl-libs-0:2.0.19-2.fc42. 100% | 1.1 MiB/s | 31.3 KiB | 00m00s [ 38/156] emacs-filesystem-1:30.0-4.fc4 100% | 817.3 KiB/s | 7.4 KiB | 00m00s [ 39/156] cpp-0:15.2.1-1.fc42.x86_64 100% | 105.1 MiB/s | 12.9 MiB | 00m00s [ 40/156] python3-boolean.py-0:4.0-13.f 100% | 10.0 MiB/s | 112.5 KiB | 00m00s [ 41/156] perl-Exporter-0:5.78-512.fc42 100% | 3.8 MiB/s | 31.0 KiB | 00m00s [ 42/156] perl-constant-0:1.33-513.fc42 100% | 1.9 MiB/s | 23.0 KiB | 00m00s [ 43/156] perl-Carp-0:1.54-512.fc42.noa 100% | 2.3 MiB/s | 28.9 KiB | 00m00s [ 44/156] perl-Data-Dumper-0:2.189-513. 100% | 4.3 MiB/s | 56.7 KiB | 00m00s [ 45/156] perl-MIME-Base32-0:1.303-23.f 100% | 2.2 MiB/s | 20.5 KiB | 00m00s [ 46/156] perl-MIME-Base64-0:3.16-512.f 100% | 2.9 MiB/s | 29.9 KiB | 00m00s [ 47/156] perl-libnet-0:3.15-513.fc42.n 100% | 12.5 MiB/s | 128.4 KiB | 00m00s [ 48/156] perl-parent-1:0.244-2.fc42.no 100% | 2.5 MiB/s | 15.2 KiB | 00m00s [ 49/156] hipcc-0:18-37.rocm6.3.1.fc42. 100% | 15.7 MiB/s | 128.3 KiB | 00m00s [ 50/156] rocm-llvm-filesystem-0:18-37. 100% | 3.4 MiB/s | 20.8 KiB | 00m00s [ 51/156] rocsolver-0:6.3.0-4.fc42.x86_ 100% | 106.2 MiB/s | 109.9 MiB | 00m01s [ 52/156] rocm-clang-0:18-37.rocm6.3.1. 
100% | 83.9 MiB/s | 21.6 MiB | 00m00s [ 53/156] rocblas-0:6.3.0-4.fc42.x86_64 100% | 105.1 MiB/s | 194.3 MiB | 00m02s [ 54/156] rocm-llvm-devel-0:18-37.rocm6 100% | 75.5 MiB/s | 3.9 MiB | 00m00s [ 55/156] rocm-clang-libs-0:18-37.rocm6 100% | 64.2 MiB/s | 22.3 MiB | 00m00s [ 56/156] perl-Digest-MD5-0:2.59-6.fc42 100% | 1.6 MiB/s | 36.0 KiB | 00m00s [ 57/156] rocm-llvm-libs-0:18-37.rocm6. 100% | 70.4 MiB/s | 19.6 MiB | 00m00s [ 58/156] perl-IO-Socket-IP-0:0.43-2.fc 100% | 1.9 MiB/s | 42.4 KiB | 00m00s [ 59/156] perl-Socket-4:2.038-512.fc42. 100% | 2.5 MiB/s | 54.8 KiB | 00m00s [ 60/156] perl-Time-Local-2:1.350-512.f 100% | 5.6 MiB/s | 34.5 KiB | 00m00s [ 61/156] rocm-clang-runtime-devel-0:18 100% | 63.7 MiB/s | 521.8 KiB | 00m00s [ 62/156] rocm-libc++-devel-0:18-37.roc 100% | 74.6 MiB/s | 993.0 KiB | 00m00s [ 63/156] rocm-libc++-0:18-37.rocm6.3.1 100% | 31.4 MiB/s | 353.6 KiB | 00m00s [ 64/156] perl-Digest-0:1.20-512.fc42.n 100% | 2.2 MiB/s | 24.9 KiB | 00m00s [ 65/156] perl-File-Basename-0:2.86-519 100% | 2.1 MiB/s | 17.0 KiB | 00m00s [ 66/156] perl-File-Copy-0:2.41-519.fc4 100% | 1.6 MiB/s | 20.0 KiB | 00m00s [ 67/156] perl-Getopt-Std-0:1.14-519.fc 100% | 1.7 MiB/s | 15.5 KiB | 00m00s [ 68/156] perl-Scalar-List-Utils-5:1.70 100% | 10.4 MiB/s | 74.6 KiB | 00m00s [ 69/156] perl-interpreter-4:5.40.3-519 100% | 10.0 MiB/s | 72.0 KiB | 00m00s [ 70/156] perl-Errno-0:1.38-519.fc42.x8 100% | 2.9 MiB/s | 14.8 KiB | 00m00s [ 71/156] perl-DynaLoader-0:1.56-519.fc 100% | 5.1 MiB/s | 25.9 KiB | 00m00s [ 72/156] perl-Encode-4:3.21-512.fc42.x 100% | 116.9 MiB/s | 1.1 MiB | 00m00s [ 73/156] perl-Getopt-Long-1:2.58-3.fc4 100% | 15.6 MiB/s | 63.7 KiB | 00m00s [ 74/156] perl-Storable-1:3.32-512.fc42 100% | 19.5 MiB/s | 99.6 KiB | 00m00s [ 75/156] perl-Pod-Usage-4:2.05-1.fc42. 100% | 13.2 MiB/s | 40.5 KiB | 00m00s [ 76/156] perl-Text-ParseWords-0:3.31-5 100% | 5.4 MiB/s | 16.5 KiB | 00m00s [ 77/156] rocm-llvm-0:18-37.rocm6.3.1.f 100% | 178.6 MiB/s | 16.1 MiB | 00m00s [ 78/156] perl-Pod-Perldoc-0:3.28.01-51 100% | 4.9 MiB/s | 85.8 KiB | 00m00s [ 79/156] perl-podlators-1:6.0.2-3.fc42 100% | 20.9 MiB/s | 128.6 KiB | 00m00s [ 80/156] groff-base-0:1.23.0-8.fc42.x8 100% | 122.7 MiB/s | 1.1 MiB | 00m00s [ 81/156] perl-File-Temp-1:0.231.100-51 100% | 11.6 MiB/s | 59.2 KiB | 00m00s [ 82/156] perl-HTTP-Tiny-0:0.090-2.fc42 100% | 27.6 MiB/s | 56.5 KiB | 00m00s [ 83/156] perl-Pod-Simple-1:3.45-512.fc 100% | 71.3 MiB/s | 219.0 KiB | 00m00s [ 84/156] perl-Term-ANSIColor-0:5.01-51 100% | 23.3 MiB/s | 47.7 KiB | 00m00s [ 85/156] perl-Term-Cap-0:1.18-512.fc42 100% | 4.3 MiB/s | 22.2 KiB | 00m00s [ 86/156] perl-File-Path-0:2.18-512.fc4 100% | 8.6 MiB/s | 35.2 KiB | 00m00s [ 87/156] perl-IO-Socket-SSL-0:2.089-2. 
100% | 74.9 MiB/s | 230.2 KiB | 00m00s [ 88/156] perl-Net-SSLeay-0:1.94-8.fc42 100% | 91.8 MiB/s | 376.0 KiB | 00m00s [ 89/156] perl-Pod-Escapes-1:1.07-512.f 100% | 19.4 MiB/s | 19.8 KiB | 00m00s [ 90/156] perl-Text-Tabs+Wrap-0:2024.00 100% | 10.6 MiB/s | 21.8 KiB | 00m00s [ 91/156] ncurses-0:6.5-5.20250125.fc42 100% | 103.6 MiB/s | 424.5 KiB | 00m00s [ 92/156] perl-overload-0:1.37-519.fc42 100% | 14.8 MiB/s | 45.4 KiB | 00m00s [ 93/156] perl-vars-0:1.05-519.fc42.noa 100% | 6.3 MiB/s | 12.8 KiB | 00m00s [ 94/156] python3-0:3.13.7-1.fc42.x86_6 100% | 15.0 MiB/s | 30.6 KiB | 00m00s [ 95/156] libb2-0:0.98.1-13.fc42.x86_64 100% | 24.8 MiB/s | 25.4 KiB | 00m00s [ 96/156] tzdata-0:2025b-1.fc42.noarch 100% | 174.3 MiB/s | 714.0 KiB | 00m00s [ 97/156] golang-0:1.24.7-1.fc42.x86_64 100% | 109.2 MiB/s | 670.9 KiB | 00m00s [ 98/156] python3-libs-0:3.13.7-1.fc42. 100% | 230.0 MiB/s | 9.2 MiB | 00m00s [ 99/156] perl-libs-4:5.40.3-519.fc42.x 100% | 12.7 MiB/s | 2.3 MiB | 00m00s [100/156] golang-src-0:1.24.7-1.fc42.no 100% | 141.1 MiB/s | 13.1 MiB | 00m00s [101/156] libstdc++-devel-0:15.2.1-1.fc 100% | 69.9 MiB/s | 2.9 MiB | 00m00s [102/156] glibc-devel-0:2.41-11.fc42.x8 100% | 35.8 MiB/s | 623.2 KiB | 00m00s [103/156] golang-bin-0:1.24.7-1.fc42.x8 100% | 179.8 MiB/s | 29.3 MiB | 00m00s [104/156] libdrm-0:2.4.125-1.fc42.x86_6 100% | 4.0 MiB/s | 161.3 KiB | 00m00s [105/156] libpciaccess-0:0.16-15.fc42.x 100% | 1.0 MiB/s | 26.3 KiB | 00m00s [106/156] perl-TermReadKey-0:2.38-24.fc 100% | 11.5 MiB/s | 35.4 KiB | 00m00s [107/156] git-0:2.51.0-2.fc42.x86_64 100% | 5.7 MiB/s | 40.8 KiB | 00m00s [108/156] perl-Git-0:2.51.0-2.fc42.noar 100% | 6.2 MiB/s | 37.9 KiB | 00m00s [109/156] perl-Error-1:0.17030-1.fc42.n 100% | 13.1 MiB/s | 40.4 KiB | 00m00s [110/156] git-core-doc-0:2.51.0-2.fc42. 
100% | 168.3 MiB/s | 3.0 MiB | 00m00s [111/156] git-core-0:2.51.0-2.fc42.x86_ 100% | 185.4 MiB/s | 5.0 MiB | 00m00s [112/156] perl-POSIX-0:2.20-519.fc42.x8 100% | 9.5 MiB/s | 97.4 KiB | 00m00s [113/156] perl-Fcntl-0:1.18-519.fc42.x8 100% | 5.8 MiB/s | 29.7 KiB | 00m00s [114/156] perl-FileHandle-0:2.05-519.fc 100% | 7.5 MiB/s | 15.3 KiB | 00m00s [115/156] perl-Symbol-0:1.09-519.fc42.n 100% | 6.9 MiB/s | 14.0 KiB | 00m00s [116/156] perl-IO-0:1.55-519.fc42.x86_6 100% | 39.9 MiB/s | 81.6 KiB | 00m00s [117/156] perl-IPC-Open3-0:1.22-519.fc4 100% | 10.6 MiB/s | 21.7 KiB | 00m00s [118/156] perl-base-0:2.27-519.fc42.noa 100% | 5.2 MiB/s | 16.0 KiB | 00m00s [119/156] perl-if-0:0.61.000-519.fc42.n 100% | 4.5 MiB/s | 13.8 KiB | 00m00s [120/156] perl-AutoLoader-0:5.74-519.fc 100% | 10.3 MiB/s | 21.1 KiB | 00m00s [121/156] perl-B-0:1.89-519.fc42.x86_64 100% | 43.1 MiB/s | 176.5 KiB | 00m00s [122/156] vim-filesystem-2:9.1.1775-1.f 100% | 7.5 MiB/s | 15.4 KiB | 00m00s [123/156] hwdata-0:0.399-1.fc42.noarch 100% | 184.0 MiB/s | 1.7 MiB | 00m00s [124/156] expat-0:2.7.2-1.fc42.x86_64 100% | 19.4 MiB/s | 119.0 KiB | 00m00s [125/156] libuv-1:1.51.0-1.fc42.x86_64 100% | 37.2 MiB/s | 266.3 KiB | 00m00s [126/156] perl-mro-0:1.29-519.fc42.x86_ 100% | 14.5 MiB/s | 29.7 KiB | 00m00s [127/156] mpdecimal-0:4.0.1-1.fc42.x86_ 100% | 19.0 MiB/s | 97.1 KiB | 00m00s [128/156] python-pip-wheel-0:24.3.1-5.f 100% | 171.9 MiB/s | 1.2 MiB | 00m00s [129/156] perl-overloading-0:0.02-519.f 100% | 4.1 MiB/s | 12.7 KiB | 00m00s [130/156] perl-locale-0:1.12-519.fc42.n 100% | 4.4 MiB/s | 13.5 KiB | 00m00s [131/156] perl-SelectSaver-0:1.02-519.f 100% | 2.8 MiB/s | 11.6 KiB | 00m00s [132/156] perl-File-stat-0:1.14-519.fc4 100% | 4.1 MiB/s | 16.9 KiB | 00m00s [133/156] perl-Class-Struct-0:0.68-519. 100% | 5.3 MiB/s | 21.9 KiB | 00m00s [134/156] libxcrypt-devel-0:4.4.38-7.fc 100% | 14.3 MiB/s | 29.4 KiB | 00m00s [135/156] less-0:679-1.fc42.x86_64 100% | 47.7 MiB/s | 195.3 KiB | 00m00s [136/156] kernel-headers-0:6.16.2-200.f 100% | 187.2 MiB/s | 1.7 MiB | 00m00s [137/156] openssh-clients-0:9.9p1-11.fc 100% | 107.0 MiB/s | 767.0 KiB | 00m00s [138/156] libedit-0:3.1-55.20250104cvs. 
100% | 17.1 MiB/s | 105.3 KiB | 00m00s [139/156] libfido2-0:1.15.0-3.fc42.x86_ 100% | 32.0 MiB/s | 98.4 KiB | 00m00s [140/156] openssh-0:9.9p1-11.fc42.x86_6 100% | 115.1 MiB/s | 353.6 KiB | 00m00s [141/156] libcbor-0:0.11.0-3.fc42.x86_6 100% | 16.2 MiB/s | 33.3 KiB | 00m00s [142/156] perl-lib-0:0.65-519.fc42.x86_ 100% | 4.8 MiB/s | 14.8 KiB | 00m00s [143/156] lua-filesystem-0:1.8.0-14.fc4 100% | 8.3 MiB/s | 33.9 KiB | 00m00s [144/156] lua-json-0:1.3.4-9.fc42.noarc 100% | 8.5 MiB/s | 26.1 KiB | 00m00s [145/156] Lmod-0:8.7.58-1.fc42.x86_64 100% | 44.3 MiB/s | 271.9 KiB | 00m00s [146/156] lua-term-0:0.08-5.fc42.x86_64 100% | 5.2 MiB/s | 15.9 KiB | 00m00s [147/156] lua-posix-0:36.2.1-8.fc42.x86 100% | 34.2 MiB/s | 140.0 KiB | 00m00s [148/156] lua-lpeg-0:1.1.0-5.fc42.x86_6 100% | 17.4 MiB/s | 71.3 KiB | 00m00s [149/156] lua-0:5.4.8-1.fc42.x86_64 100% | 38.9 MiB/s | 199.2 KiB | 00m00s [150/156] libtommath-0:1.3.1~rc1-5.fc42 100% | 7.0 MiB/s | 64.4 KiB | 00m00s [151/156] tcl-1:9.0.0-7.fc42.x86_64 100% | 77.4 MiB/s | 1.2 MiB | 00m00s [152/156] gcc-plugin-annobin-0:15.2.1-1 100% | 13.6 MiB/s | 55.8 KiB | 00m00s [153/156] procps-ng-0:4.0.4-6.fc42.x86_ 100% | 18.8 MiB/s | 365.3 KiB | 00m00s [154/156] cmake-rpm-macros-0:3.31.6-2.f 100% | 4.1 MiB/s | 16.9 KiB | 00m00s [155/156] annobin-docs-0:12.94-1.fc42.n 100% | 22.1 MiB/s | 90.4 KiB | 00m00s [156/156] annobin-plugin-gcc-0:12.94-1. 100% | 137.0 MiB/s | 981.9 KiB | 00m00s -------------------------------------------------------------------------------- [156/156] Total 100% | 257.4 MiB/s | 630.3 MiB | 00m02s Running transaction [ 1/158] Verify package files 100% | 90.0 B/s | 156.0 B | 00m02s [ 2/158] Prepare transaction 100% | 1.3 KiB/s | 156.0 B | 00m00s [ 3/158] Installing cmake-filesystem-0 100% | 7.4 MiB/s | 7.6 KiB | 00m00s [ 4/158] Installing expat-0:2.7.2-1.fc 100% | 22.6 MiB/s | 300.7 KiB | 00m00s [ 5/158] Installing rocm-llvm-filesyst 100% | 6.8 MiB/s | 13.9 KiB | 00m00s [ 6/158] Installing libmpc-0:1.3.1-7.f 100% | 162.2 MiB/s | 166.1 KiB | 00m00s [ 7/158] Installing rocm-libc++-0:18-3 100% | 76.4 MiB/s | 1.5 MiB | 00m00s [ 8/158] Installing rocm-llvm-libs-0:1 100% | 85.6 MiB/s | 93.8 MiB | 00m01s [ 9/158] Installing rocm-clang-libs-0: 100% | 90.6 MiB/s | 113.9 MiB | 00m01s [ 10/158] Installing lua-0:5.4.8-1.fc42 100% | 42.2 MiB/s | 605.2 KiB | 00m00s [ 11/158] Installing numactl-libs-0:2.0 100% | 0.0 B/s | 53.8 KiB | 00m00s [ 12/158] Installing go-filesystem-0:3. 100% | 0.0 B/s | 392.0 B | 00m00s [ 13/158] Installing make-1:4.4.1-10.fc 100% | 112.5 MiB/s | 1.8 MiB | 00m00s [ 14/158] Installing rocm-comgr-0:18-37 100% | 84.9 MiB/s | 137.1 MiB | 00m02s [ 15/158] Installing lua-term-0:0.08-5. 100% | 24.3 MiB/s | 24.8 KiB | 00m00s [ 16/158] Installing rocm-lld-0:18-37.r 100% | 79.6 MiB/s | 6.5 MiB | 00m00s [ 17/158] Installing rocm-libc++-devel- 100% | 109.2 MiB/s | 7.2 MiB | 00m00s [ 18/158] Installing cpp-0:15.2.1-1.fc4 100% | 375.7 MiB/s | 37.9 MiB | 00m00s [ 19/158] Installing hipblas-common-dev 100% | 0.0 B/s | 17.8 KiB | 00m00s [ 20/158] Installing annobin-docs-0:12. 
100% | 0.0 B/s | 100.0 KiB | 00m00s [ 21/158] Installing libtommath-0:1.3.1 100% | 128.4 MiB/s | 131.5 KiB | 00m00s [ 22/158] Installing tcl-1:9.0.0-7.fc42 100% | 180.6 MiB/s | 4.3 MiB | 00m00s [ 23/158] Installing procps-ng-0:4.0.4- 100% | 67.4 MiB/s | 1.0 MiB | 00m00s [ 24/158] Installing lua-lpeg-0:1.1.0-5 100% | 178.9 MiB/s | 183.2 KiB | 00m00s [ 25/158] Installing lua-json-0:1.3.4-9 100% | 61.7 MiB/s | 63.2 KiB | 00m00s [ 26/158] Installing lua-posix-0:36.2.1 100% | 185.0 MiB/s | 568.3 KiB | 00m00s [ 27/158] Installing lua-filesystem-0:1 100% | 65.4 MiB/s | 67.0 KiB | 00m00s [ 28/158] Installing Lmod-0:8.7.58-1.fc 100% | 103.4 MiB/s | 1.3 MiB | 00m00s [ 29/158] Installing rocm-rpm-macros-mo 100% | 27.0 MiB/s | 27.7 KiB | 00m00s [ 30/158] Installing libcbor-0:0.11.0-3 100% | 77.3 MiB/s | 79.2 KiB | 00m00s [ 31/158] Installing libfido2-0:1.15.0- 100% | 237.9 MiB/s | 243.6 KiB | 00m00s [ 32/158] Installing openssh-0:9.9p1-11 100% | 92.1 MiB/s | 1.4 MiB | 00m00s [ 33/158] Installing libedit-0:3.1-55.2 100% | 240.0 MiB/s | 245.8 KiB | 00m00s [ 34/158] Installing openssh-clients-0: 100% | 117.6 MiB/s | 2.7 MiB | 00m00s [ 35/158] Installing less-0:679-1.fc42. 100% | 28.6 MiB/s | 409.4 KiB | 00m00s [ 36/158] Installing git-core-0:2.51.0- 100% | 369.6 MiB/s | 23.7 MiB | 00m00s [ 37/158] Installing git-core-doc-0:2.5 100% | 357.8 MiB/s | 17.9 MiB | 00m00s [ 38/158] Installing kernel-headers-0:6 100% | 243.9 MiB/s | 6.8 MiB | 00m00s [ 39/158] Installing libxcrypt-devel-0: 100% | 16.2 MiB/s | 33.1 KiB | 00m00s [ 40/158] Installing glibc-devel-0:2.41 100% | 179.4 MiB/s | 2.3 MiB | 00m00s [ 41/158] Installing gcc-0:15.2.1-1.fc4 100% | 433.0 MiB/s | 111.3 MiB | 00m00s [ 42/158] Installing python-pip-wheel-0 100% | 622.2 MiB/s | 1.2 MiB | 00m00s [ 43/158] Installing mpdecimal-0:4.0.1- 100% | 213.7 MiB/s | 218.8 KiB | 00m00s [ 44/158] Installing libuv-1:1.51.0-1.f 100% | 279.8 MiB/s | 573.0 KiB | 00m00s [ 45/158] Installing vim-filesystem-2:9 100% | 4.6 MiB/s | 4.7 KiB | 00m00s [ 46/158] Installing hwdata-0:0.399-1.f 100% | 533.1 MiB/s | 9.6 MiB | 00m00s [ 47/158] Installing libpciaccess-0:0.1 100% | 0.0 B/s | 45.9 KiB | 00m00s [ 48/158] Installing libdrm-0:2.4.125-1 100% | 195.1 MiB/s | 399.6 KiB | 00m00s [ 49/158] Installing rocm-runtime-0:6.3 100% | 485.7 MiB/s | 2.9 MiB | 00m00s [ 50/158] Installing rocm-runtime-devel 100% | 185.3 MiB/s | 569.2 KiB | 00m00s [ 51/158] Installing libstdc++-devel-0: 100% | 337.9 MiB/s | 16.2 MiB | 00m00s [ 52/158] Installing golang-src-0:1.24. 100% | 331.2 MiB/s | 80.1 MiB | 00m00s [ 53/158] Installing golang-0:1.24.7-1. 100% | 688.5 MiB/s | 9.0 MiB | 00m00s [ 54/158] Installing golang-bin-0:1.24. 100% | 462.4 MiB/s | 121.6 MiB | 00m00s [ 55/158] Installing tzdata-0:2025b-1.f 100% | 70.1 MiB/s | 1.9 MiB | 00m00s [ 56/158] Installing libb2-0:0.98.1-13. 100% | 9.2 MiB/s | 47.2 KiB | 00m00s [ 57/158] Installing python3-libs-0:3.1 100% | 374.4 MiB/s | 40.4 MiB | 00m00s [ 58/158] Installing python3-0:3.13.7-1 100% | 2.3 MiB/s | 30.5 KiB | 00m00s [ 59/158] Installing cmake-rpm-macros-0 100% | 0.0 B/s | 8.3 KiB | 00m00s [ 60/158] Installing python3-zstarfile- 100% | 26.8 MiB/s | 27.5 KiB | 00m00s [ 61/158] Installing python3-boolean.py 100% | 255.0 MiB/s | 522.2 KiB | 00m00s [ 62/158] Installing python3-license-ex 100% | 363.0 MiB/s | 1.1 MiB | 00m00s [ 63/158] Installing rocm-llvm-0:18-37. 
100% | 86.7 MiB/s | 79.3 MiB | 00m01s [ 64/158] Installing rocm-llvm-devel-0: 100% | 104.7 MiB/s | 24.7 MiB | 00m00s [ 65/158] Installing rocm-llvm-static-0 100% | 115.7 MiB/s | 233.9 MiB | 00m02s [ 66/158] Installing ncurses-0:6.5-5.20 100% | 28.6 MiB/s | 614.7 KiB | 00m00s [ 67/158] Installing groff-base-0:1.23. 100% | 125.6 MiB/s | 3.9 MiB | 00m00s [ 68/158] Installing perl-Digest-0:1.20 100% | 36.2 MiB/s | 37.1 KiB | 00m00s [ 69/158] Installing perl-Digest-MD5-0: 100% | 60.1 MiB/s | 61.6 KiB | 00m00s [ 70/158] Installing perl-FileHandle-0: 100% | 0.0 B/s | 9.8 KiB | 00m00s [ 71/158] Installing perl-B-0:1.89-519. 100% | 244.8 MiB/s | 501.3 KiB | 00m00s [ 72/158] Installing perl-MIME-Base32-0 100% | 0.0 B/s | 32.2 KiB | 00m00s [ 73/158] Installing perl-Data-Dumper-0 100% | 114.7 MiB/s | 117.5 KiB | 00m00s [ 74/158] Installing perl-libnet-0:3.15 100% | 287.8 MiB/s | 294.7 KiB | 00m00s [ 75/158] Installing perl-AutoLoader-0: 100% | 0.0 B/s | 20.9 KiB | 00m00s [ 76/158] Installing perl-URI-0:5.31-2. 100% | 131.7 MiB/s | 269.6 KiB | 00m00s [ 77/158] Installing perl-IO-Socket-IP- 100% | 99.8 MiB/s | 102.2 KiB | 00m00s [ 78/158] Installing perl-Time-Local-2: 100% | 0.0 B/s | 70.6 KiB | 00m00s [ 79/158] Installing perl-Text-Tabs+Wra 100% | 0.0 B/s | 23.9 KiB | 00m00s [ 80/158] Installing perl-File-Path-0:2 100% | 0.0 B/s | 64.5 KiB | 00m00s [ 81/158] Installing perl-Pod-Escapes-1 100% | 0.0 B/s | 25.9 KiB | 00m00s [ 82/158] Installing perl-if-0:0.61.000 100% | 0.0 B/s | 6.2 KiB | 00m00s [ 83/158] Installing perl-Net-SSLeay-0: 100% | 271.7 MiB/s | 1.4 MiB | 00m00s [ 84/158] Installing perl-IO-Socket-SSL 100% | 345.4 MiB/s | 707.4 KiB | 00m00s [ 85/158] Installing perl-locale-0:1.12 100% | 0.0 B/s | 6.9 KiB | 00m00s [ 86/158] Installing perl-Term-ANSIColo 100% | 96.9 MiB/s | 99.2 KiB | 00m00s [ 87/158] Installing perl-Term-Cap-0:1. 100% | 0.0 B/s | 30.6 KiB | 00m00s [ 88/158] Installing perl-HTTP-Tiny-0:0 100% | 152.8 MiB/s | 156.4 KiB | 00m00s [ 89/158] Installing perl-File-Temp-1:0 100% | 160.2 MiB/s | 164.1 KiB | 00m00s [ 90/158] Installing perl-POSIX-0:2.20- 100% | 226.9 MiB/s | 232.3 KiB | 00m00s [ 91/158] Installing perl-Class-Struct- 100% | 0.0 B/s | 25.9 KiB | 00m00s [ 92/158] Installing perl-Pod-Simple-1: 100% | 278.5 MiB/s | 570.4 KiB | 00m00s [ 93/158] Installing perl-IPC-Open3-0:1 100% | 0.0 B/s | 23.3 KiB | 00m00s [ 94/158] Installing perl-Socket-4:2.03 100% | 119.1 MiB/s | 122.0 KiB | 00m00s [ 95/158] Installing perl-Symbol-0:1.09 100% | 0.0 B/s | 7.2 KiB | 00m00s [ 96/158] Installing perl-SelectSaver-0 100% | 0.0 B/s | 2.6 KiB | 00m00s [ 97/158] Installing perl-File-stat-0:1 100% | 0.0 B/s | 13.1 KiB | 00m00s [ 98/158] Installing perl-podlators-1:6 100% | 24.1 MiB/s | 321.4 KiB | 00m00s [ 99/158] Installing perl-Pod-Perldoc-0 100% | 12.7 MiB/s | 169.2 KiB | 00m00s [100/158] Installing perl-Text-ParseWor 100% | 0.0 B/s | 14.6 KiB | 00m00s [101/158] Installing perl-Fcntl-0:1.18- 100% | 0.0 B/s | 50.0 KiB | 00m00s [102/158] Installing perl-base-0:2.27-5 100% | 0.0 B/s | 12.9 KiB | 00m00s [103/158] Installing perl-mro-0:1.29-51 100% | 41.6 MiB/s | 42.6 KiB | 00m00s [104/158] Installing perl-overloading-0 100% | 0.0 B/s | 5.5 KiB | 00m00s [105/158] Installing perl-Pod-Usage-4:2 100% | 7.2 MiB/s | 87.9 KiB | 00m00s [106/158] Installing perl-IO-0:1.55-519 100% | 147.7 MiB/s | 151.3 KiB | 00m00s [107/158] Installing perl-constant-0:1. 
100% | 0.0 B/s | 27.4 KiB | 00m00s [108/158] Installing perl-MIME-Base64-0 100% | 43.2 MiB/s | 44.3 KiB | 00m00s [109/158] Installing perl-parent-1:0.24 100% | 0.0 B/s | 11.0 KiB | 00m00s [110/158] Installing perl-File-Basename 100% | 0.0 B/s | 14.6 KiB | 00m00s [111/158] Installing perl-Getopt-Std-0: 100% | 0.0 B/s | 11.7 KiB | 00m00s [112/158] Installing perl-Scalar-List-U 100% | 145.2 MiB/s | 148.6 KiB | 00m00s [113/158] Installing perl-Errno-0:1.38- 100% | 0.0 B/s | 8.7 KiB | 00m00s [114/158] Installing perl-vars-0:1.05-5 100% | 0.0 B/s | 4.3 KiB | 00m00s [115/158] Installing perl-Storable-1:3. 100% | 228.4 MiB/s | 233.9 KiB | 00m00s [116/158] Installing perl-overload-0:1. 100% | 0.0 B/s | 71.9 KiB | 00m00s [117/158] Installing perl-Getopt-Long-1 100% | 143.8 MiB/s | 147.2 KiB | 00m00s [118/158] Installing perl-Exporter-0:5. 100% | 0.0 B/s | 55.6 KiB | 00m00s [119/158] Installing perl-Carp-0:1.54-5 100% | 0.0 B/s | 47.7 KiB | 00m00s [120/158] Installing perl-DynaLoader-0: 100% | 0.0 B/s | 32.5 KiB | 00m00s [121/158] Installing perl-Encode-4:3.21 100% | 213.4 MiB/s | 4.7 MiB | 00m00s [122/158] Installing perl-PathTools-0:3 100% | 60.1 MiB/s | 184.5 KiB | 00m00s [123/158] Installing perl-libs-4:5.40.3 100% | 309.1 MiB/s | 9.9 MiB | 00m00s [124/158] Installing perl-interpreter-4 100% | 9.0 MiB/s | 120.1 KiB | 00m00s [125/158] Installing perl-File-Which-0: 100% | 0.0 B/s | 31.4 KiB | 00m00s [126/158] Installing perl-File-Copy-0:2 100% | 0.0 B/s | 20.2 KiB | 00m00s [127/158] Installing perl-TermReadKey-0 100% | 64.6 MiB/s | 66.2 KiB | 00m00s [128/158] Installing perl-Error-1:0.170 100% | 78.1 MiB/s | 80.0 KiB | 00m00s [129/158] Installing perl-lib-0:0.65-51 100% | 0.0 B/s | 8.9 KiB | 00m00s [130/158] Installing perl-Git-0:2.51.0- 100% | 0.0 B/s | 65.4 KiB | 00m00s [131/158] Installing git-0:2.51.0-2.fc4 100% | 56.4 MiB/s | 57.7 KiB | 00m00s [132/158] Installing rocm-clang-runtime 100% | 153.4 MiB/s | 6.9 MiB | 00m00s [133/158] Installing rocm-clang-0:18-37 100% | 92.9 MiB/s | 117.6 MiB | 00m01s [134/158] Installing rocm-clang-devel-0 100% | 132.2 MiB/s | 21.9 MiB | 00m00s [135/158] Installing rocm-device-libs-0 100% | 101.1 MiB/s | 3.2 MiB | 00m00s [136/158] Installing rocm-comgr-devel-0 100% | 101.9 MiB/s | 104.4 KiB | 00m00s [137/158] Installing hipcc-0:18-37.rocm 100% | 29.8 MiB/s | 762.6 KiB | 00m00s [138/158] Installing rocm-hip-0:6.3.1-4 100% | 381.8 MiB/s | 23.3 MiB | 00m00s [139/158] Installing rocblas-0:6.3.0-4. 100% | 191.1 MiB/s | 3.8 GiB | 00m20s [140/158] Installing rocsolver-0:6.3.0- 100% | 49.6 MiB/s | 130.2 MiB | 00m03s [141/158] Installing hipblas-0:6.3.0-4. 100% | 103.0 MiB/s | 1.1 MiB | 00m00s [142/158] Installing emacs-filesystem-1 100% | 0.0 B/s | 544.0 B | 00m00s [143/158] Installing golist-0:0.10.4-6. 100% | 198.6 MiB/s | 4.4 MiB | 00m00s [144/158] Installing rhash-0:1.4.5-2.fc 100% | 24.9 MiB/s | 356.4 KiB | 00m00s [145/158] Installing jsoncpp-0:1.9.6-1. 100% | 32.1 MiB/s | 263.1 KiB | 00m00s [146/158] Installing cmake-data-0:3.31. 100% | 124.2 MiB/s | 9.1 MiB | 00m00s [147/158] Installing cmake-0:3.31.6-2.f 100% | 364.1 MiB/s | 34.2 MiB | 00m00s [148/158] Installing kmod-0:33-3.fc42.x 100% | 18.0 MiB/s | 239.9 KiB | 00m00s [149/158] Installing rocminfo-0:6.3.0-2 100% | 6.4 MiB/s | 78.7 KiB | 00m00s [150/158] Installing go-rpm-macros-0:3. 100% | 8.1 MiB/s | 99.5 KiB | 00m00s [151/158] Installing hipblas-devel-0:6. 100% | 207.6 MiB/s | 3.1 MiB | 00m00s [152/158] Installing rocblas-devel-0:6. 
100% | 199.5 MiB/s | 2.8 MiB | 00m00s [153/158] Installing rocm-hip-devel-0:6 100% | 168.0 MiB/s | 2.7 MiB | 00m00s [154/158] Installing go-vendor-tools-0: 100% | 23.7 MiB/s | 339.9 KiB | 00m00s [155/158] Installing gcc-c++-0:15.2.1-1 100% | 372.5 MiB/s | 41.4 MiB | 00m00s [156/158] Installing gcc-plugin-annobin 100% | 4.8 MiB/s | 58.6 KiB | 00m00s [157/158] Installing annobin-plugin-gcc 100% | 69.4 MiB/s | 995.1 KiB | 00m00s [158/158] Installing systemd-rpm-macros 100% | 80.4 KiB/s | 11.3 KiB | 00m00s Complete! Finish: build setup for ollama-0.12.3-1.fc42.src.rpm Start: rpmbuild ollama-0.12.3-1.fc42.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%mkbuilddir): /bin/sh -e /var/tmp/rpm-tmp.oIViS9 Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.K6mena + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd /builddir/build/BUILD/ollama-0.12.3-build + rm -rf ollama-0.12.3 + /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/ollama-0.12.3.tar.gz + STATUS=0 + '[' 0 -ne 0 ']' + cd ollama-0.12.3 + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + rm -fr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/vendor + [[ ! -e /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin ]] + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin' + export GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + [[ ! -e /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama ]] ++ dirname /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama' + ln -fs /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/vendor.tar.bz2 + STATUS=0 + '[' 0 -ne 0 ']' + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . 
+ /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/remove-runtime-for-cuda-and-rocm.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/replace-library-paths.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + /usr/lib/rpm/rpmuncompress /builddir/build/SOURCES/vendor-pdevine-tensor-fix-cannonical-import-paths.patch + /usr/bin/patch -p1 -s --fuzz=0 --no-backup-if-mismatch -f + cp /builddir/build/SOURCES/LICENSE.sentencepiece convert/sentencepiece/LICENSE + RPM_EC=0 ++ jobs -p + exit 0 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.78J9Sm + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc42.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires Updating and loading repositories: Additional repo https_developer_downlo 100% | 19.6 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 19.6 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 7.5 KiB/s | 1.5 KiB | 00m00s fedora 100% | 44.7 KiB/s | 30.9 KiB | 00m01s updates 100% | 17.5 KiB/s | 7.3 KiB | 00m00s Repositories loaded. Package Arch Version Repository Size Installing: askalono-cli x86_64 0.5.0-2.fc42 fedora 4.7 MiB Transaction Summary: Installing: 1 package Package "cmake-3.31.6-2.fc42.x86_64" is already installed. Package "gcc-c++-15.2.1-1.fc42.x86_64" is already installed. Package "go-rpm-macros-3.8.0-1.fc42.x86_64" is already installed. Package "go-vendor-tools-0.8.0-1.fc42.noarch" is already installed. Package "hipblas-devel-6.3.0-4.fc42.x86_64" is already installed. Package "rocblas-devel-6.3.0-4.fc42.x86_64" is already installed. Package "rocm-comgr-devel-18-37.rocm6.3.1.fc42.x86_64" is already installed. Package "rocm-hip-devel-6.3.1-4.fc42.x86_64" is already installed. Package "rocm-runtime-devel-6.3.1-4.fc42.x86_64" is already installed. Package "rocminfo-6.3.0-2.fc42.x86_64" is already installed. Package "systemd-rpm-macros-257.9-2.fc42.noarch" is already installed. Total size of inbound packages is 2 MiB. Need to download 2 MiB. After this operation, 5 MiB extra will be used (install 5 MiB, remove 0 B). [1/1] askalono-cli-0:0.5.0-2.fc42.x86_6 100% | 93.3 MiB/s | 2.4 MiB | 00m00s -------------------------------------------------------------------------------- [1/1] Total 100% | 83.7 MiB/s | 2.4 MiB | 00m00s Running transaction [1/3] Verify package files 100% | 166.0 B/s | 1.0 B | 00m00s [2/3] Prepare transaction 100% | 47.0 B/s | 1.0 B | 00m00s [3/3] Installing askalono-cli-0:0.5.0-2 100% | 138.7 MiB/s | 4.7 MiB | 00m00s Complete! 
Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.eymUrx + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc42.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires Updating and loading repositories: Additional repo https_developer_downlo 100% | 19.4 KiB/s | 3.9 KiB | 00m00s Additional repo https_developer_downlo 100% | 19.4 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 7.4 KiB/s | 1.5 KiB | 00m00s fedora 100% | 45.7 KiB/s | 30.9 KiB | 00m01s updates 100% | 17.6 KiB/s | 7.3 KiB | 00m00s Repositories loaded. Nothing to do. Package "askalono-cli-0.5.0-2.fc42.x86_64" is already installed. Package "cmake-3.31.6-2.fc42.x86_64" is already installed. Package "gcc-c++-15.2.1-1.fc42.x86_64" is already installed. Package "go-rpm-macros-3.8.0-1.fc42.x86_64" is already installed. Package "go-vendor-tools-0.8.0-1.fc42.noarch" is already installed. Package "hipblas-devel-6.3.0-4.fc42.x86_64" is already installed. Package "rocblas-devel-6.3.0-4.fc42.x86_64" is already installed. Package "rocm-comgr-devel-18-37.rocm6.3.1.fc42.x86_64" is already installed. Package "rocm-hip-devel-6.3.1-4.fc42.x86_64" is already installed. Package "rocm-runtime-devel-6.3.1-4.fc42.x86_64" is already installed. Package "rocminfo-6.3.0-2.fc42.x86_64" is already installed. Package "systemd-rpm-macros-257.9-2.fc42.noarch" is already installed. Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1759536000 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.OAizOH + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml generate_buildrequires + RPM_EC=0 ++ jobs -p + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.Y2XSeL + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + export 'GO_LDFLAGS= -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release' + GO_LDFLAGS=' -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release' ++ echo ollama-0.12.3-1.fc42-1759536000 ++ sha1sum ++ cut -d ' ' -f1 + GOPATH=/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode + GO111MODULE=on + go build -buildmode pie -compiler gc '-tags=rpm_crashtraceback ' -a -v -ldflags ' -X github.com/ollama/ollama/ml/backend/ggml/ggml/src.libDir=/usr/lib64 -X github.com/ollama/ollama/discover.libDir=/usr/lib64 -X github.com/ollama/ollama/server.mode=release -X github.com/ollama/ollama/version=0.12.3 -B 0xcbc1d9507f3e45e7db25fdabef7358617f43842e -compressdwarf=false -linkmode=external -extldflags '\''-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '\''' -o /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama github.com/ollama/ollama internal/byteorder internal/unsafeheader internal/goarch internal/cpu 
internal/coverage/rtcov internal/abi internal/chacha8rand internal/godebugs internal/goexperiment internal/goos internal/profilerecord internal/runtime/atomic internal/asan internal/bytealg internal/msan internal/runtime/math internal/runtime/sys internal/runtime/exithook internal/runtime/syscall internal/stringslite sync/atomic math/bits internal/itoa unicode/utf8 unicode cmp crypto/internal/fips140deps/byteorder math crypto/internal/fips140deps/cpu crypto/internal/fips140/alias crypto/internal/fips140/subtle internal/race internal/runtime/maps internal/sync crypto/internal/boring/sig encoding unicode/utf16 github.com/rivo/uniseg internal/nettrace vendor/golang.org/x/crypto/cryptobyte/asn1 golang.org/x/crypto/internal/alias log/internal log/slog/internal github.com/ollama/ollama/version container/list vendor/golang.org/x/crypto/internal/alias golang.org/x/text/encoding/internal/identifier golang.org/x/text/internal/utf8internal github.com/ollama/ollama/fs hash/maphash image/color runtime golang.org/x/image/math/f64 github.com/gin-gonic/gin/internal/bytesconv golang.org/x/net/html/atom github.com/go-playground/locales/currency github.com/leodido/go-urn/scim/schema github.com/pelletier/go-toml/v2/internal/characters google.golang.org/protobuf/internal/flags github.com/d4l3k/go-bfloat16 google.golang.org/protobuf/internal/set github.com/apache/arrow/go/arrow/internal/debug golang.org/x/xerrors/internal github.com/chewxy/math32 math/cmplx gorgonia.org/vecf64 gonum.org/v1/gonum/blas gonum.org/v1/gonum/internal/asm/c128 gorgonia.org/vecf32 gonum.org/v1/gonum/internal/math32 gonum.org/v1/gonum/internal/asm/f64 gonum.org/v1/gonum/lapack gonum.org/v1/gonum/internal/cmplx64 gonum.org/v1/gonum/internal/asm/f32 gonum.org/v1/gonum/internal/asm/c64 gonum.org/v1/gonum/mathext/internal/amos gonum.org/v1/gonum/mathext/internal/gonum gonum.org/v1/gonum/mathext/internal/cephes github.com/ollama/ollama/server/internal/internal/stringsx github.com/agnivade/levenshtein gonum.org/v1/gonum/mathext internal/reflectlite iter sync crypto/subtle slices weak maps internal/bisect internal/testlog internal/singleflight errors unique sort internal/oserror syscall internal/godebug io strconv crypto/internal/fips140deps/godebug path math/rand/v2 bytes strings hash crypto crypto/internal/randutil reflect math/rand bufio crypto/internal/fips140 time crypto/internal/impl crypto/internal/fips140/sha256 crypto/internal/fips140/sha3 crypto/internal/fips140/sha512 crypto/internal/fips140/hmac internal/syscall/unix internal/syscall/execenv crypto/internal/fips140/check regexp/syntax crypto/internal/fips140/aes crypto/internal/fips140/edwards25519/field crypto/internal/fips140/edwards25519 regexp context io/fs internal/poll internal/filepathlite vendor/golang.org/x/net/dns/dnsmessage net/netip os internal/fmtsort encoding/binary runtime/cgo encoding/base64 golang.org/x/sys/unix encoding/pem crypto/internal/sysrand fmt crypto/internal/entropy crypto/internal/fips140/drbg crypto/internal/fips140/ed25519 crypto/internal/fips140only crypto/internal/fips140/aes/gcm crypto/cipher math/big crypto/internal/boring encoding/json crypto/rand crypto/ed25519 github.com/containerd/console github.com/mattn/go-runewidth encoding/csv crypto/md5 github.com/olekukonko/tablewriter crypto/sha1 database/sql/driver encoding/hex net crypto/aes crypto/des crypto/dsa crypto/internal/fips140/nistec/fiat crypto/internal/boring/bbig crypto/internal/fips140/bigmod crypto/sha3 crypto/internal/fips140hash crypto/sha512 encoding/asn1 crypto/hmac crypto/rc4 
crypto/internal/fips140/rsa vendor/golang.org/x/crypto/cryptobyte crypto/rsa crypto/internal/fips140/nistec crypto/sha256 crypto/x509/pkix net/url path/filepath golang.org/x/crypto/chacha20 golang.org/x/crypto/internal/poly1305 golang.org/x/crypto/blowfish golang.org/x/crypto/ssh/internal/bcrypt_pbkdf log log/slog/internal/buffer github.com/ollama/ollama/format log/slog compress/flate hash/crc32 github.com/ollama/ollama/types/model crypto/internal/fips140/ecdh crypto/elliptic crypto/ecdh crypto/internal/fips140/ecdsa compress/gzip golang.org/x/crypto/curve25519 crypto/internal/fips140/hkdf crypto/internal/fips140/mlkem crypto/ecdsa crypto/internal/fips140/tls12 crypto/internal/fips140/tls13 vendor/golang.org/x/crypto/chacha20 vendor/golang.org/x/crypto/internal/poly1305 vendor/golang.org/x/sys/cpu crypto/tls/internal/fips140tls vendor/golang.org/x/text/transform vendor/golang.org/x/crypto/chacha20poly1305 vendor/golang.org/x/text/unicode/bidi vendor/golang.org/x/text/unicode/norm crypto/internal/hpke vendor/golang.org/x/net/http2/hpack vendor/golang.org/x/text/secure/bidirule mime mime/quotedprintable net/http/internal net/http/internal/ascii golang.org/x/sync/errgroup golang.org/x/text/transform golang.org/x/text/encoding golang.org/x/text/encoding/internal golang.org/x/text/runes vendor/golang.org/x/net/idna golang.org/x/text/encoding/unicode os/user golang.org/x/term github.com/ollama/ollama/progress github.com/emirpasic/gods/v2/utils github.com/emirpasic/gods/v2/containers github.com/emirpasic/gods/v2/lists github.com/emirpasic/gods/v2/lists/arraylist flag github.com/ollama/ollama/readline github.com/google/uuid crypto/x509 github.com/ollama/ollama/envconfig net/textproto vendor/golang.org/x/net/http/httpguts vendor/golang.org/x/net/http/httpproxy mime/multipart embed github.com/ollama/ollama/llama/llama.cpp/common github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile golang.org/x/crypto/ssh github.com/ollama/ollama/auth crypto/tls github.com/ollama/ollama/llama/llama.cpp/tools/mtmd net/http/httptrace net/http github.com/ollama/ollama/api github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu github.com/ollama/ollama/parser github.com/ollama/ollama/discover github.com/ollama/ollama/fs/util/bufioutil github.com/ollama/ollama/fs/ggml github.com/ollama/ollama/logutil github.com/ollama/ollama/ml container/heap github.com/dlclark/regexp2/syntax github.com/dlclark/regexp2 github.com/emirpasic/gods/v2/trees github.com/emirpasic/gods/v2/trees/binaryheap github.com/ollama/ollama/model/input github.com/ollama/ollama/kvcache github.com/ollama/ollama/ml/nn/rope github.com/ollama/ollama/ml/nn/pooling image golang.org/x/image/bmp hash/adler32 compress/zlib golang.org/x/image/ccitt golang.org/x/image/tiff/lzw golang.org/x/image/tiff io/ioutil golang.org/x/image/riff golang.org/x/image/vp8 golang.org/x/image/vp8l golang.org/x/image/webp image/internal/imageutil image/jpeg image/png golang.org/x/sync/semaphore os/exec github.com/ollama/ollama/runner/common github.com/ollama/ollama/ml/nn github.com/ollama/ollama/ml/nn/fast image/draw golang.org/x/image/draw github.com/ollama/ollama/model/imageproc encoding/xml github.com/gin-contrib/sse github.com/gin-gonic/gin/internal/json golang.org/x/net/html github.com/gabriel-vasile/mimetype/internal/charset debug/dwarf internal/saferio debug/macho github.com/gabriel-vasile/mimetype/internal/json github.com/gabriel-vasile/mimetype/internal/magic 
github.com/gabriel-vasile/mimetype github.com/go-playground/locales github.com/go-playground/universal-translator github.com/leodido/go-urn golang.org/x/sys/cpu golang.org/x/crypto/sha3 golang.org/x/text/internal/tag golang.org/x/text/internal/language golang.org/x/text/internal/language/compact golang.org/x/text/language github.com/go-playground/validator/v10 github.com/pelletier/go-toml/v2/internal/danger github.com/pelletier/go-toml/v2/unstable github.com/pelletier/go-toml/v2/internal/tracker github.com/pelletier/go-toml/v2 encoding/gob go/token html text/template/parse text/template html/template net/rpc github.com/ugorji/go/codec hash/fnv google.golang.org/protobuf/internal/detrand google.golang.org/protobuf/internal/errors google.golang.org/protobuf/encoding/protowire google.golang.org/protobuf/internal/pragma google.golang.org/protobuf/reflect/protoreflect google.golang.org/protobuf/internal/encoding/messageset google.golang.org/protobuf/internal/genid google.golang.org/protobuf/internal/order google.golang.org/protobuf/internal/strs google.golang.org/protobuf/reflect/protoregistry google.golang.org/protobuf/runtime/protoiface google.golang.org/protobuf/proto gopkg.in/yaml.v3 github.com/gin-gonic/gin/binding github.com/gin-gonic/gin/render github.com/mattn/go-isatty golang.org/x/text/unicode/bidi golang.org/x/text/secure/bidirule golang.org/x/text/unicode/norm golang.org/x/net/idna golang.org/x/net/http/httpguts golang.org/x/net/http2/hpack golang.org/x/net/internal/httpcommon golang.org/x/net/http2 golang.org/x/net/http2/h2c net/http/httputil github.com/gin-gonic/gin github.com/gin-contrib/cors archive/tar archive/zip golang.org/x/text/encoding/unicode/utf32 github.com/nlpodyssey/gopickle/types github.com/nlpodyssey/gopickle/pickle github.com/nlpodyssey/gopickle/pytorch google.golang.org/protobuf/internal/descfmt google.golang.org/protobuf/internal/descopts google.golang.org/protobuf/internal/editiondefaults google.golang.org/protobuf/internal/encoding/text google.golang.org/protobuf/internal/encoding/defval google.golang.org/protobuf/internal/filedesc google.golang.org/protobuf/encoding/prototext google.golang.org/protobuf/internal/encoding/tag google.golang.org/protobuf/internal/impl google.golang.org/protobuf/internal/filetype google.golang.org/protobuf/internal/version google.golang.org/protobuf/runtime/protoimpl github.com/ollama/ollama/convert/sentencepiece github.com/apache/arrow/go/arrow/endian github.com/apache/arrow/go/arrow/internal/cpu github.com/apache/arrow/go/arrow/memory github.com/apache/arrow/go/arrow/bitutil github.com/apache/arrow/go/arrow/decimal128 github.com/apache/arrow/go/arrow/float16 golang.org/x/xerrors github.com/apache/arrow/go/arrow github.com/apache/arrow/go/arrow/array github.com/apache/arrow/go/arrow/tensor github.com/pkg/errors github.com/xtgo/set github.com/chewxy/hm github.com/google/flatbuffers/go github.com/pdevine/tensor/internal/storage github.com/pdevine/tensor/internal/execution github.com/ollama/ollama/ml/backend/ggml/ggml/src github.com/pdevine/tensor/internal/serialization/fb github.com/gogo/protobuf/proto github.com/gogo/protobuf/protoc-gen-gogo/descriptor google.golang.org/protobuf/types/descriptorpb github.com/gogo/protobuf/gogoproto go4.org/unsafe/assume-no-moving-gc gonum.org/v1/gonum/blas/gonum gonum.org/v1/gonum/floats/scalar gonum.org/v1/gonum/floats google.golang.org/protobuf/internal/editionssupport google.golang.org/protobuf/types/gofeaturespb github.com/x448/float16 golang.org/x/exp/rand 
google.golang.org/protobuf/reflect/protodesc gonum.org/v1/gonum/stat/combin github.com/ollama/ollama/fs/gguf github.com/golang/protobuf/proto github.com/ollama/ollama/harmony github.com/ollama/ollama/model/parsers github.com/ollama/ollama/model/renderers github.com/ollama/ollama/openai github.com/pdevine/tensor/internal/serialization/pb github.com/ollama/ollama/server/internal/internal/names github.com/ollama/ollama/server/internal/cache/blob runtime/debug github.com/ollama/ollama/server/internal/internal/backoff github.com/ollama/ollama/server/internal/client/ollama github.com/ollama/ollama/template gonum.org/v1/gonum/blas/blas64 gonum.org/v1/gonum/blas/cblas128 github.com/ollama/ollama/server/internal/registry github.com/ollama/ollama/thinking gonum.org/v1/gonum/lapack/gonum github.com/ollama/ollama/tools github.com/ollama/ollama/types/errtypes os/signal github.com/ollama/ollama/types/syncmap github.com/spf13/pflag github.com/spf13/cobra gonum.org/v1/gonum/lapack/lapack64 gonum.org/v1/gonum/mat gonum.org/v1/gonum/stat github.com/pdevine/tensor gonum.org/v1/gonum/stat/distuv github.com/pdevine/tensor/native github.com/ollama/ollama/convert github.com/ollama/ollama/ml/backend/ggml github.com/ollama/ollama/llama/llama.cpp/src github.com/ollama/ollama/ml/backend github.com/ollama/ollama/model github.com/ollama/ollama/model/models/gemma2 github.com/ollama/ollama/model/models/bert github.com/ollama/ollama/model/models/deepseek2 github.com/ollama/ollama/model/models/gemma3 github.com/ollama/ollama/model/models/gemma3n github.com/ollama/ollama/model/models/gptoss github.com/ollama/ollama/model/models/llama github.com/ollama/ollama/model/models/llama4 github.com/ollama/ollama/model/models/mistral3 github.com/ollama/ollama/model/models/mllama github.com/ollama/ollama/model/models/qwen2 github.com/ollama/ollama/model/models/qwen25vl github.com/ollama/ollama/model/models/qwen3 github.com/ollama/ollama/model/models github.com/ollama/ollama/llama github.com/ollama/ollama/sample github.com/ollama/ollama/llm github.com/ollama/ollama/runner/llamarunner github.com/ollama/ollama/runner/ollamarunner github.com/ollama/ollama/server github.com/ollama/ollama/runner github.com/ollama/ollama/cmd github.com/ollama/ollama + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic 
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + /usr/bin/cmake -S . -B redhat-linux-build_ggml-cpu -DCMAKE_C_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_Fortran_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_INSTALL_DO_STRIP:BOOL=OFF -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_FULL_SBINDIR:PATH=/usr/bin -DCMAKE_INSTALL_SBINDIR:PATH=bin -DINCLUDE_INSTALL_DIR:PATH=/usr/include -DLIB_INSTALL_DIR:PATH=/usr/lib64 -DSYSCONF_INSTALL_DIR:PATH=/etc -DSHARE_INSTALL_PREFIX:PATH=/usr/share -DLIB_SUFFIX=64 -DBUILD_SHARED_LIBS:BOOL=ON --preset CPU Preset CMake variables: CMAKE_BUILD_TYPE="Release" CMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded" -- The C compiler identification is GNU 15.2.1 -- The CXX compiler identification is GNU 15.2.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/gcc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/g++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- GGML_SYSTEM_ARCH: x86 -- Including CPU backend -- x86 detected -- Adding CPU backend variant ggml-cpu-x64: -- x86 detected -- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42 -- x86 detected -- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX -- x86 detected -- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2 -- x86 detected -- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512 -- x86 detected -- Adding CPU backend variant ggml-cpu-icelake: 
-msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI -- x86 detected -- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI -- Looking for a CUDA compiler -- Looking for a CUDA compiler - NOTFOUND -- Looking for a HIP compiler -- Looking for a HIP compiler - /usr/lib64/rocm/llvm/bin/clang++ -- Configuring done (7.4s) -- Generating done (0.0s) -- Build files have been written to: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu CMake Warning: Manually-specified variables were not used by the project: CMAKE_Fortran_FLAGS_RELEASE CMAKE_INSTALL_DO_STRIP INCLUDE_INSTALL_DIR LIB_SUFFIX SHARE_INSTALL_PREFIX SYSCONF_INSTALL_DIR + /usr/bin/cmake --build redhat-linux-build_ggml-cpu -j4 --verbose --target ggml-cpu Change Dir: '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile -j4 ggml-cpu /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/gmake -f CMakeFiles/Makefile2 ggml-cpu gmake[1]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/CMakeFiles 99 /usr/bin/gmake -f CMakeFiles/Makefile2 ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/all gmake[2]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering 
directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/DependInfo.cmake "--color=" cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 1%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF CMakeFiles/ggml-base.dir/ggml.c.o.d -o CMakeFiles/ggml-base.dir/ggml.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c [ 2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5663:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function] 5663 | static void ggml_hash_map_free(struct hash_map * map) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5656:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function] 5656 | static struct hash_map * ggml_new_hash_map(size_t size) { | ^~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 3%] Built target ggml-cpu-x64-feats /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/depend [ 3%] Built target ggml-cpu-alderlake-feats gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o [ 4%] Built target ggml-cpu-sse42-feats /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/depend cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o -MF CMakeFiles/ggml-base.dir/ggml.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/build cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/DependInfo.cmake "--color=" gmake[3]: Entering 
directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/build [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 
135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 5%] Built target ggml-cpu-sandybridge-feats gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 5%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c [ 5%] Built target ggml-cpu-haswell-feats [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
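Note on the recurring -Wunused-function diagnostics above: they all point at ggml-impl.h, which defines small static helper functions directly in a header, so every .c/.cpp translation unit that includes the header without calling a given helper gets its own unused copy and -Wall reports it once per file. They are cosmetic and do not stop the build. The following is a minimal, self-contained sketch of that pattern (hypothetical file and function names, not ggml's actual code):

/* unused_helper.c -- illustrative only; reproduces the diagnostic class seen above.
 * Compile with:  gcc -Wall -c unused_helper.c
 * Expected:      warning: 'bitset_size' defined but not used [-Wunused-function]
 */
#include <stddef.h>

/* plain `static` definition that is never called -> -Wall warns in this TU */
static size_t bitset_size(size_t n) {
    return (n + 31) / 32;
}

/* `static inline` (or __attribute__((unused))) is the usual way to keep such
 * header helpers warning-free; GCC does not warn about unused static inline */
static inline size_t bitset_words(size_t n) {
    return (n + 31) / 32;
}

int main(void) {
    return 0;
}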
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 6%] Built target ggml-cpu-skylakex-feats [ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp:14: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/depend /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SSE42 -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -MF CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o.d -o CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 8%] Built target ggml-cpu-icelake-feats [ 9%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-threading.cpp [ 9%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c [ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o CMakeFiles/ggml-base.dir/gguf.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:4067:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function] 4067 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid, | ^~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function] 579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min, | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct 
ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp:3: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool 
ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-base.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-base.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared 
-Wl,-soname,libggml-base.so -o ../../../../../lib/ollama/libggml-base.so "CMakeFiles/ggml-base.dir/ggml.c.o" "CMakeFiles/ggml-base.dir/ggml.cpp.o" "CMakeFiles/ggml-base.dir/ggml-alloc.c.o" "CMakeFiles/ggml-base.dir/ggml-backend.cpp.o" "CMakeFiles/ggml-base.dir/ggml-opt.cpp.o" "CMakeFiles/ggml-base.dir/ggml-threading.cpp.o" "CMakeFiles/ggml-base.dir/ggml-quants.c.o" "CMakeFiles/ggml-base.dir/gguf.cpp.o" -lm gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 11%] Built target ggml-base /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/depend /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/DependInfo.cmake "--color=" gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/build gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 12%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o [ 13%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c [ 14%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c [ 15%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t 
params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 19%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function]
  140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6,
                 from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  187 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value)
{ | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o [ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~
[ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o [ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 23%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 24%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 26%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 27%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~
[ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~
[ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 32%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 35%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 36%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t 
i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static 
float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 39%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: 
‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, 
uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used 
[-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined 
but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 41%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 42%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float 
ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, 
int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used 
[-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined 
but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 44%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp [ 46%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ 
defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | 
static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const 
struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o [ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used 
[-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used 
[-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 49%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 50%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 51%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, 
uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 52%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 53%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o [ 54%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_x64_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: 
‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 55%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sse42_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 56%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-x64.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-x64.dir/link.txt --verbose=1 In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: 
‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 57%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sse42.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-sse42.dir/link.txt --verbose=1 [ 58%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used 
[-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 59%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c [ 60%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_sandybridge_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mavx -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but 
not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 61%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_alderlake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavxvnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp [ 62%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sandybridge.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-sandybridge.dir/link.txt --verbose=1 In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 62%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-alderlake.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-alderlake.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-x64.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now 
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-x64.so "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 62%] Built target ggml-cpu-x64 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-sse42.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-sse42.so "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 62%] Built target ggml-cpu-sse42 /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build.make 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 63%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * 
params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 64%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory 
'/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 65%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c [ 65%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const 
void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-sandybridge.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-sandybridge.so "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 65%] Built target ggml-cpu-sandybridge In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool 
ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 66%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp [ 67%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 68%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 69%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp [ 70%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c [ 71%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) 
{ | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used 
[-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct 
ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 73%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not 
used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | 
^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined 
but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-alderlake.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-alderlake.so "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [ 73%] Built target ggml-cpu-alderlake [ 74%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp [ 75%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp [ 77%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 78%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct 
ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 79%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
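Aside on the warning spam above and below: every one of these -Wunused-function messages points at a small plain `static` helper defined in ggml-impl.h, so GCC prints the full set once per translation unit that includes the header without calling that helper; they are warnings only (nothing here is promoted to an error except format-security) and do not fail the build. A minimal sketch of the pattern, with hypothetical names rather than the ggml sources themselves:

/* unused_demo.c — reproduces the warning class seen throughout this log */
#include <stddef.h>

/* A plain `static` function that this translation unit never calls:
 * `gcc -Wall -c unused_demo.c` reports
 * "'bitset_size' defined but not used [-Wunused-function]". */
static size_t bitset_size(size_t n) {
    return (n + 31) / 32;
}

/* The usual header-friendly fix: `static inline` (or
 * __attribute__((unused))) keeps the helper available without the warning. */
static inline size_t bitset_size_quiet(size_t n) {
    return (n + 31) / 32;
}

int main(void) {
    return 0;
}

Compiling the sketch with gcc -Wall warns about bitset_size only; bitset_size_quiet stays silent because unused static inline functions are exempt from -Wunused-function.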
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct 
ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 80%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp [ 81%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
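Worth noting amid the repetition: the same ggml-cpu sources are compiled several times over, once per CPU variant — haswell with -mavx2/-mfma, skylakex adding the -mavx512f/-mavx512bw family, icelake adding -mavx512vbmi/-mavx512vnni, and alderlake linked earlier — each going into its own libggml-cpu-<variant>.so and built with -DGGML_BACKEND_DL so a matching variant can be loaded at run time. A rough C sketch of the general idea (per-ISA code paths chosen by CPU feature detection), offered purely for illustration and not as ollama's actual backend loader:

/* dispatch_demo.c — runtime ISA selection, conceptually similar to shipping
 * one shared object per microarchitecture and loading the best match. */
#include <stdio.h>

static void kernel_generic(void) { puts("baseline x86-64 path"); }
static void kernel_avx2(void)    { puts("AVX2/FMA path (haswell-class)"); }
static void kernel_avx512(void)  { puts("AVX-512 path (skylakex/icelake-class)"); }

int main(void) {
    __builtin_cpu_init();                    /* populate GCC's CPU-feature cache */
    if (__builtin_cpu_supports("avx512f")) {
        kernel_avx512();
    } else if (__builtin_cpu_supports("avx2")) {
        kernel_avx2();
    } else {
        kernel_generic();
    }
    return 0;
}

Building each real kernel with the matching -m flags, as the commands above do, is what makes the wider-vector paths safe to execute only on CPUs that actually advertise those features.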
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct 
ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 83%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp [ 84%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct 
ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 84%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
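
The -Wunused-function warnings repeated throughout this part of the log all come from one pattern: ggml-impl.h defines small helpers as plain `static` functions, so every translation unit that includes the header gets its own private copy, and GCC flags any copy that the translation unit never calls. They are noise rather than errors and do not fail the build. The following is a minimal sketch of that pattern and the usual way to silence it; the header and function names are hypothetical, not taken from the ggml sources.

/* helper.h — hypothetical header reproducing the pattern behind the
 * warnings above: a plain `static` function defined in a header is
 * duplicated into every .c file that includes it. */
#ifndef HELPER_H
#define HELPER_H
#include <stddef.h>

/* Any translation unit that includes this header but never calls
 * helper_bitset_size() owns an unused copy, and `gcc -Wall` reports
 * -Wunused-function for it. */
static size_t helper_bitset_size(size_t n) {
    return (n + 7) / 8;
}

/* Declaring the helper `static inline` (or tagging it with
 * __attribute__((unused))) keeps the header-only definition but
 * suppresses the diagnostic. */
static inline size_t helper_bitset_size_quiet(size_t n) {
    return (n + 7) / 8;
}

#endif /* HELPER_H */

Compiling two files that include this header with `gcc -Wall -c a.c b.c` reports the warning once per file for the plain `static` helper and stays silent for the `static inline` one, because GCC treats an unused inline definition as intentional.
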
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 85%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 86%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp [ 86%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t 
ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const 
ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 87%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used 
[-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 88%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
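
The compile lines above show the same ggml-cpu sources being built several times with different instruction-set flags: ggml-cpu-haswell (-mavx -mavx2 -mfma -mf16c), ggml-cpu-skylakex (adding -mavx512f/cd/vl/dq/bw), and ggml-cpu-icelake (adding -mavx512vbmi -mavx512vnni), each as its own dynamically loadable shared object (-DGGML_BACKEND_DL, -fPIC, *_EXPORTS). The idea is that one package can ship all variants and pick the best one for the host CPU at run time. The sketch below only illustrates that selection idea using GCC's __builtin_cpu_supports; the library file names and the dlopen-based loader are assumptions for illustration, not ollama's actual mechanism.

/* Illustrative only: choosing one of several CPU-variant backend
 * libraries at run time. File names are hypothetical. */
#include <stdio.h>
#include <dlfcn.h>

static const char *pick_cpu_backend(void) {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx512vnni") && __builtin_cpu_supports("avx512vbmi"))
        return "libggml-cpu-icelake.so";   /* AVX-512 VNNI/VBMI machines */
    if (__builtin_cpu_supports("avx512f"))
        return "libggml-cpu-skylakex.so";  /* baseline AVX-512 */
    if (__builtin_cpu_supports("avx2"))
        return "libggml-cpu-haswell.so";   /* AVX2/FMA */
    return "libggml-cpu.so";               /* generic x86-64 fallback */
}

int main(void) {
    const char *name = pick_cpu_backend();
    void *h = dlopen(name, RTLD_NOW | RTLD_LOCAL);
    printf("would load: %s (%s)\n", name, h ? "ok" : dlerror());
    if (h) dlclose(h);
    return 0;
}

Build with `gcc -O2 dispatch.c -o dispatch -ldl` (the -ldl is redundant on current glibc). This is why the log compiles each .cpp file three or four times: the march-specific objects end up in separate shared objects rather than one fat binary.
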
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 89%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp [ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined 
but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
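
Every compile line also carries Fedora's hardening defaults: -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 switches in glibc's checked string/memory functions, -fstack-protector-strong and -fstack-clash-protection guard the stack, and -fcf-protection enables control-flow protection. As a small, generic illustration of what the fortify macro buys (standard glibc behavior, not ollama code):

/* fortify_demo.c — generic demo of _FORTIFY_SOURCE, as passed in the
 * compile flags above. Build: gcc -O2 -D_FORTIFY_SOURCE=3 fortify_demo.c */
#include <string.h>
#include <stdio.h>

int main(void) {
    char dst[8];
    const char src[] = "0123456789abcdef";  /* longer than dst */

    /* With optimization plus _FORTIFY_SOURCE, glibc routes this call
     * through __memcpy_chk, which knows sizeof(dst) and aborts with
     * "*** buffer overflow detected ***" instead of silently
     * corrupting the stack. Without the macro it is a plain memcpy. */
    memcpy(dst, src, sizeof(src));

    printf("%.7s\n", dst);
    return 0;
}
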
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/common.h:57:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 57 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | 
static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 91%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp [ 92%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | 
^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_haswell_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 94%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: 
warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 94%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-haswell.dir/link.txt --verbose=1 [ 95%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 96%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c [ 97%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_skylakex_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const 
struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 98%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_AVX -DGGML_AVX2 -DGGML_AVX512 -DGGML_AVX512_VBMI -DGGML_AVX512_VNNI -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_BMI2 -DGGML_COMMIT=0x0 -DGGML_F16C -DGGML_FMA -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_SSE42 -DGGML_USE_LLAMAFILE -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_icelake_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -msse4.2 -mf16c -mfma -mbmi2 -mavx -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o -MF CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o.d -o CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: 
warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-skylakex.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-skylakex.dir/link.txt --verbose=1 [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-icelake.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-cpu-icelake.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-haswell.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-haswell.so "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built 
target ggml-cpu-haswell /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-skylakex.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-skylakex.so "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-skylakex /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-cpu-icelake.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../lib/ollama/libggml-cpu-icelake.so "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o" 
"CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o" "CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o" "CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/lib/ollama: ../../../../../lib/ollama/libggml-base.so gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' [100%] Built target ggml-cpu-icelake /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/DependInfo.cmake "--color=" gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' /usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu' gmake[3]: Nothing to be done for 'ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu.dir/build'. 
gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu'
[100%] Built target ggml-cpu
gmake[2]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu'
/usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu/CMakeFiles 0
gmake[1]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-cpu'
+ CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CFLAGS
+ CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CXXFLAGS
+ FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FFLAGS
+ FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FCFLAGS
+ VALAFLAGS=-g
+ export VALAFLAGS
+ RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn'
+ export RUSTFLAGS
+ LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '
+ export LDFLAGS
+ LT_SYS_LIBRARY_PATH=/usr/lib64:
+ export LT_SYS_LIBRARY_PATH
+ CC=gcc
+ export CC
+ CXX=g++
+ export CXX
+ /usr/bin/cmake -S . -B redhat-linux-build_ggml-rocm-6 -DCMAKE_C_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_Fortran_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_INSTALL_DO_STRIP:BOOL=OFF -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_FULL_SBINDIR:PATH=/usr/bin -DCMAKE_INSTALL_SBINDIR:PATH=bin -DINCLUDE_INSTALL_DIR:PATH=/usr/include -DLIB_INSTALL_DIR:PATH=/usr/lib64 -DSYSCONF_INSTALL_DIR:PATH=/etc -DSHARE_INSTALL_PREFIX:PATH=/usr/share -DLIB_SUFFIX=64 -DBUILD_SHARED_LIBS:BOOL=ON --preset 'ROCm 6'
Preset CMake variables:
  AMDGPU_TARGETS="gfx900;gfx940;gfx941;gfx942;gfx1010;gfx1012;gfx1030;gfx1100;gfx1101;gfx1102;gfx1151;gfx1200;gfx1201;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-"
  CMAKE_BUILD_TYPE="Release"
  CMAKE_HIP_FLAGS="-parallel-jobs=4"
  CMAKE_HIP_PLATFORM="amd"
  CMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded"
-- The C compiler identification is GNU 15.2.1
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- Looking for a HIP compiler
-- Looking for a HIP compiler - /usr/lib64/rocm/llvm/bin/clang++
-- The HIP compiler identification is Clang 18.0.0
-- Detecting HIP compiler ABI info
-- Detecting HIP compiler ABI info - done
-- Check for working HIP compiler: /usr/lib64/rocm/llvm/bin/clang++ - skipped
-- Detecting HIP compile features
-- Detecting HIP compile features - done
-- HIP and hipBLAS found
-- Configuring done (2.9s)
-- Generating done (0.0s)
-- Build files have been written to: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6
CMake Warning:
  Manually-specified variables were not used by the project:

    CMAKE_Fortran_FLAGS_RELEASE
    CMAKE_INSTALL_DO_STRIP
    INCLUDE_INSTALL_DIR
    LIB_SUFFIX
    SHARE_INSTALL_PREFIX
    SYSCONF_INSTALL_DIR

+ /usr/bin/cmake --build redhat-linux-build_ggml-rocm-6 -j4 --verbose --target ggml-hip
Change Dir: '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6'
Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile -j4 ggml-hip
/usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 --check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/gmake -f CMakeFiles/Makefile2 ggml-hip
gmake[1]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6'
/usr/bin/cmake -S/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 -B/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 --check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/CMakeFiles 47
/usr/bin/gmake -f CMakeFiles/Makefile2 ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/all
gmake[2]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6'
/usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/depend
gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6'
cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/DependInfo.cmake "--color="
gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6'
/usr/bin/gmake -f ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build.make ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/build
gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6'
[ 4%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
[ 4%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
[ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o
cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF CMakeFiles/ggml-base.dir/ggml.c.o.d -o CMakeFiles/ggml-base.dir/ggml.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c [ 4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o -MF CMakeFiles/ggml-base.dir/ggml.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-alloc.c:4: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used 
[-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5663:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function] 5663 | static void ggml_hash_map_free(struct hash_map * map) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5656:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function] 5656 | static struct hash_map * ggml_new_hash_map(size_t size) { | ^~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml.cpp:1: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, 
struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-backend.cpp:14: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-threading.cpp [ 6%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/gcc -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-opt.cpp:6: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:4067:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function] 4067 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid, | ^~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function] 579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min, | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-quants.c:5: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/g++ -DGGML_BACKEND_DL -DGGML_BUILD -DGGML_COMMIT=0x0 -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -std=c++17 -fPIC -MD -MT ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o CMakeFiles/ggml-base.dir/gguf.cpp.o -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp In file included from /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/gguf.cpp:3: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:282:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 282 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:261:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 261 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:256:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used 
[-Wunused-function] 256 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:187:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 187 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:150:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 150 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:140:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 140 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:135:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 135 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:129:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 129 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 8%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-base.dir/link.txt --verbose=1 /usr/bin/g++ -fPIC -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -DNDEBUG -Wl,--dependency-file=CMakeFiles/ggml-base.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -Wl,-soname,libggml-base.so -o ../../../../../lib/ollama/libggml-base.so 
"CMakeFiles/ggml-base.dir/ggml.c.o" "CMakeFiles/ggml-base.dir/ggml.cpp.o" "CMakeFiles/ggml-base.dir/ggml-alloc.c.o" "CMakeFiles/ggml-base.dir/ggml-backend.cpp.o" "CMakeFiles/ggml-base.dir/ggml-opt.cpp.o" "CMakeFiles/ggml-base.dir/ggml-threading.cpp.o" "CMakeFiles/ggml-base.dir/ggml-quants.c.o" "CMakeFiles/ggml-base.dir/gguf.cpp.o" -lm gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [ 8%] Built target ggml-base /usr/bin/gmake -f ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build.make ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6 /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/DependInfo.cmake "--color=" Dependee "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/DependInfo.cmake" is newer than depender "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend.internal". Dependee "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/CMakeDirectoryInformation.cmake" is newer than depender "/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/depend.internal". 
Scanning dependencies of target ggml-hip
gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6'
/usr/bin/gmake -f ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build.make ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/build
gmake[3]: Entering directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6'
[ 8%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o
[ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o
[ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o
cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cu
cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/..
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/arange.cu [ 10%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/acc.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cu [ 12%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o [ 12%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cu cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cu [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cu [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/concat.cu [ 14%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cu [ 17%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cu [ 17%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cu [ 19%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/convert.cu [ 19%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cu [ 21%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu [ 21%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile-f16.cu [ 23%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile-f32.cu [ 25%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu [ 25%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cu [ 27%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cu [ 27%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
1 warning generated when compiling for gfx1030.
1 warning generated when compiling for gfx1010.
1 warning generated when compiling for gfx1100.
1 warning generated when compiling for gfx1012.
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
1 warning generated when compiling for gfx1200.
1 warning generated when compiling for gfx1102.
1 warning generated when compiling for gfx1151.
1 warning generated when compiling for gfx1101.
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
1 warning generated when compiling for gfx900.
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
1 warning generated when compiling for gfx906.
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
1 warning generated when compiling for gfx1201.
1 warning generated when compiling for gfx908.
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
1 warning generated when compiling for gfx90a.
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
1 warning generated when compiling for gfx90a.
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
  147 |     char archName[archLen + 1];
      |                   ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression
/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here
  146 |     int archLen = strlen(devName);
      |         ^
1 warning generated when compiling for gfx940.
1 warning generated when compiling for gfx941.
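Editor's note on the repeated -Wvla-cxx-extension diagnostics above: ggml-cuda.cu lines 146-147 size a stack array from strlen(devName) at run time, i.e. a variable-length array, which Clang accepts in C++ only as an extension, and the warning is re-emitted once per --offload-arch target. The sketch below is only illustrative: it restates the flagged pattern from the compiler's own excerpt and shows one possible VLA-free rewrite; the surrounding ggml function, the helper name, and how archName is actually used are not visible in this log and are assumptions, not the project's fix.

#include <cstring>
#include <string>

// Hypothetical helper, for illustration only; the real ggml code around
// ggml-cuda.cu:146-147 is not shown in this build log.
static std::string copy_arch_name(const char *devName) {
    size_t archLen = std::strlen(devName);   // same length computation as line 146
    // Instead of `char archName[archLen + 1];` (the VLA flagged above),
    // let std::string own a heap-backed, NUL-terminated copy of the name:
    return std::string(devName, archLen);
}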
[ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/gla.cu [ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for gfx942. [ 29%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mean.cu /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension] 147 | char archName[archLen + 1]; | ^~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:147:19: note: read of non-const variable 'archLen' is not allowed in a constant expression /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:146:9: note: declared here 146 | int archLen = strlen(devName); | ^ 1 warning generated when compiling for host. [ 31%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cu [ 31%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu [ 34%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cu [ 34%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cu [ 36%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/norm.cu [ 36%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/pad.cu [ 38%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cu [ 40%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cu [ 40%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/roll.cu [ 42%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/rope.cu [ 42%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/scale.cu [ 44%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cu [ 44%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cu [ 46%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cu [ 48%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/sum.cu [ 48%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cu [ 51%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cu [ 51%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/unary.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cu [ 53%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu [ 55%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu [ 55%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu [ 57%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu [ 57%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu [ 59%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu [ 59%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu [ 61%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu [ 63%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu [ 63%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu [ 65%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu [ 65%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu [ 68%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu [ 70%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu [ 70%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu [ 72%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq1_s.cu [ 72%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_s.cu [ 74%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu [ 74%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_s.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu [ 76%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu [ 78%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu [ 78%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-mxfp4.cu [ 80%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q2_k.cu [ 80%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q3_k.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_0.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_1.cu [ 82%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_k.cu [ 85%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_0.cu [ 85%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_1.cu [ 87%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_k.cu [ 87%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q6_k.cu [ 89%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q8_0.cu [ 89%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu [ 91%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu [ 93%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu [ 93%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu [ 95%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu [ 95%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu [ 97%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu [ 97%] Building HIP object ml/backend/ggml/ggml/src/ggml-hip/CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/lib64/rocm/llvm/bin/clang++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_HIP_NO_MMQ_MFMA -DGGML_HIP_NO_VMM -DGGML_SHARED -DGGML_USE_HIP -DGGML_VERSION=0x0 -DNDEBUG -DUSE_PROF_API=1 -D__HIP_PLATFORM_AMD__=1 -D__HIP_ROCclr__=1 -Dggml_hip_EXPORTS -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/include -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-hip/.. 
-I/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/../include -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG -std=c++17 --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- -fPIC -o CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o -x hip -c /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu [100%] Linking HIP shared module ../../../../../../lib/ollama/libggml-hip.so cd /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/ml/backend/ggml/ggml/src/ggml-hip && /usr/bin/cmake -E cmake_link_script CMakeFiles/ggml-hip.dir/link.txt --verbose=1 /usr/lib64/rocm/llvm/bin/clang++ -fPIC -parallel-jobs=4 --gpu-max-threads-per-block=1024 -O3 -DNDEBUG --offload-arch=gfx900 --offload-arch=gfx940 --offload-arch=gfx941 --offload-arch=gfx942 --offload-arch=gfx1010 --offload-arch=gfx1012 --offload-arch=gfx1030 --offload-arch=gfx1100 --offload-arch=gfx1101 --offload-arch=gfx1102 --offload-arch=gfx1151 --offload-arch=gfx1200 --offload-arch=gfx1201 --offload-arch=gfx906:xnack- --offload-arch=gfx908:xnack- --offload-arch=gfx90a:xnack+ --offload-arch=gfx90a:xnack- --hip-link --rtlib=compiler-rt -unwindlib=libgcc -Xlinker --dependency-file=CMakeFiles/ggml-hip.dir/link.d -Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes -shared -o ../../../../../../lib/ollama/libggml-hip.so "CMakeFiles/ggml-hip.dir/__/ggml-cuda/acc.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/add-id.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/arange.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/argmax.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/argsort.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/binbcast.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/clamp.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/concat.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv-transpose-1d.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-dw.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/conv2d-transpose.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/convert.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/count-equal.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/cpy.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/cross-entropy-loss.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/diagmask.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-tile-f32.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn-wmma-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/fattn.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/getrows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ggml-cuda.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/gla.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/im2col.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mean.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmf.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmq.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvf.cu.o" 
"CMakeFiles/ggml-hip.dir/__/ggml-cuda/mmvq.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/norm.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/opt-step-adamw.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/out-prod.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/pad.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/pool2d.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/quantize.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/roll.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/rope.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/scale.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/set-rows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/softcap.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/softmax.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-conv.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/ssm-scan.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/sum.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/sumrows.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/tsembd.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/unary.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/upscale.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/wkv.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq1_s.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_s.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_s.cu.o" 
"CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-mxfp4.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q2_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q3_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q4_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_1.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q5_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q6_k.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/mmq-instance-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o" "CMakeFiles/ggml-hip.dir/__/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.o" -Wl,-rpath,/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/lib/ollama: ../../../../../../lib/ollama/libggml-base.so /usr/lib64/libhipblas.so.2.3clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-hardened-ld' [-Wunused-command-line-argument] clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1' [-Wunused-command-line-argument] clang++: warning: argument unused during compilation: '-specs=/usr/lib/rpm/redhat/redhat-package-notes' [-Wunused-command-line-argument] /usr/lib64/librocblas.so.4.3 /usr/lib64/libamdhip64.so.6.3.42133 /usr/lib64/libamdhip64.so.6.3.42133 gmake[3]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' [100%] Built target ggml-hip gmake[2]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' /usr/bin/cmake -E cmake_progress_start /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6/CMakeFiles 0 gmake[1]: Leaving directory '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/redhat-linux-build_ggml-rocm-6' + RPM_EC=0 ++ jobs -p + exit 0 Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.wAefHB + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + '[' 
/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT '!=' / ']' + rm -rf /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT ++ dirname /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + mkdir -p /builddir/build/BUILD/ollama-0.12.3-build + mkdir /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml install --destdir /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT --install-directory /usr/share/licenses/ollama --filelist licenses.list Using detector: askalono + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin' + install -m 0755 -vp /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ '/builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin/ollama' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ollama' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig' + install -m 0644 -vp /builddir/build/SOURCES/sysconfig-ollama /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig/ollama '/builddir/build/SOURCES/sysconfig-ollama' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/etc/sysconfig/ollama' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system' + install -m 0644 -vp /builddir/build/SOURCES/ollama.service /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system/ollama.service '/builddir/build/SOURCES/ollama.service' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/systemd/system/ollama.service' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d' + install -m 0644 -vp /builddir/build/SOURCES/ollama-user.conf /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d/ollama.conf '/builddir/build/SOURCES/ollama-user.conf' -> '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib/sysusers.d/ollama.conf' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var' install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib' + install -m 0755 -vd /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib/ollama install: creating directory '/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/var/lib/ollama' + DESTDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + /usr/bin/cmake --install redhat-linux-build_ggml-cpu --component CPU -- Install configuration: "Release" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-base.so -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so -- Set non-toolchain 
portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so" to "" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so" to "" + DESTDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT + /usr/bin/cmake --install redhat-linux-build_ggml-rocm-6 --component HIP -- Install configuration: "Release" -- Installing: /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-hip.so -- Set non-toolchain portion of runtime path of "/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-hip.so" to "" + /usr/bin/find-debuginfo -j4 --strict-build-id -m -i --build-id-seed 0.12.3-1.fc42 --unique-debug-suffix -0.12.3-1.fc42.x86_64 --unique-debug-src-base ollama-0.12.3-1.fc42.x86_64 --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 -S debugsourcefiles.list /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3 find-debuginfo: starting Extracting debug info from 10 files Error while writing index for `/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-hip.so': No debugging symbols gdb-add-index: No index was created for /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/lib64/ollama/libggml-hip.so gdb-add-index: [Was there no debuginfo? Was there already an index?] warning: Unsupported auto-load script at offset 0 in section .debug_gdb_scripts of file /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/bin/ollama. Use `info auto-load python-scripts [REGEXP]' to list them. DWARF-compressing 10 files sepdebugcrcfix: Updated 9 CRC32s, 1 CRC32s did match. 
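The %install step recorded above stages everything into the temporary BUILDROOT rather than the live filesystem: hand-placed files go in via install(1) with explicit modes, and each CMake-built ggml backend is installed per component by pointing DESTDIR at the same staging root. A minimal sketch of that pattern follows; the paths, the ./build directory and the mybinary names are illustrative placeholders, not taken from the spec.

# Hedged sketch of the staged-install pattern seen above; only the tools and
# flags mirror the log, every path and file name here is made up for illustration.
DESTDIR=/tmp/buildroot-demo

# Create target directories and copy files with explicit modes, as install(1) does in %install.
install -m 0755 -vd "$DESTDIR/usr/bin"
install -m 0755 -vp ./mybinary "$DESTDIR/usr/bin/"
install -m 0644 -vp -D ./mybinary.service "$DESTDIR/usr/lib/systemd/system/mybinary.service"

# Install a single CMake component into the same staging root, matching the
# "DESTDIR=... cmake --install <builddir> --component <name>" calls in the log.
DESTDIR="$DESTDIR" cmake --install ./build --component CPU

Staging through a single BUILDROOT is what lets find-debuginfo and the brp-* scripts that follow operate on the packaged tree before rpmbuild assembles the binary RPMs.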
Creating .debug symlinks for symlinks to ELF files Copying sources found by 'debugedit -l' to /usr/src/debug/ollama-0.12.3-1.fc42.x86_64 find-debuginfo: done + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-ldconfig + /usr/lib/rpm/brp-compress + /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip + /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip + /usr/lib/rpm/check-rpaths + /usr/lib/rpm/redhat/brp-mangle-shebangs + /usr/lib/rpm/brp-remove-la-files + env /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 -j4 + /usr/lib/rpm/redhat/brp-python-hardlink + /usr/bin/add-determinism --brp -j4 /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT Scanned 497 directories and 1628 files, processed 0 inodes, 0 modified (0 replaced + 0 rewritten), 0 unsupported format, 0 errors Reading /builddir/build/BUILD/ollama-0.12.3-build/SPECPARTS/rpm-debuginfo.specpart Executing(%check): /bin/sh -e /var/tmp/rpm-tmp.g0bGYl + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + go_vendor_license --config /builddir/build/SOURCES/go-vendor-tools.toml report all --verify 'Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSL-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC0-1.0 AND ISC AND LicenseRef-Fedora-Public-Domain AND LicenseRef-scancode-protobuf AND MIT AND NCSA AND NTP AND OpenSSL AND ZPL-2.1 AND Zlib' Using detector: askalono LICENSE: MIT convert/sentencepiece/LICENSE: Apache-2.0 llama/llama.cpp/LICENSE: MIT ml/backend/ggml/ggml/LICENSE: MIT vendor/github.com/agnivade/levenshtein/License.txt: MIT vendor/github.com/apache/arrow/go/arrow/LICENSE.txt: (Apache-2.0 AND BSD-3-Clause) AND BSD-3-Clause AND CC0-1.0 AND (LicenseRef-scancode-public-domain AND MIT) AND Apache-2.0 AND BSL-1.0 AND (BSD-2-Clause AND BSD-3-Clause) AND MIT AND (BSL-1.0 AND BSD-2-Clause) AND BSD-2-Clause AND ZPL-2.1 AND LicenseRef-scancode-protobuf AND NCSA AND (CC-BY-3.0 AND MIT) AND (CC-BY-4.0 AND LicenseRef-scancode-public-domain) AND NTP AND Zlib AND OpenSSL AND (BSD-3-Clause AND BSD-2-Clause) AND (BSD-2-Clause AND Zlib) vendor/github.com/bytedance/sonic/LICENSE: Apache-2.0 vendor/github.com/bytedance/sonic/loader/LICENSE: Apache-2.0 vendor/github.com/chewxy/hm/LICENCE: MIT vendor/github.com/chewxy/math32/LICENSE: BSD-2-Clause vendor/github.com/cloudwego/base64x/LICENSE: Apache-2.0 vendor/github.com/cloudwego/base64x/LICENSE-APACHE: Apache-2.0 vendor/github.com/cloudwego/iasm/LICENSE-APACHE: Apache-2.0 vendor/github.com/containerd/console/LICENSE: Apache-2.0 vendor/github.com/d4l3k/go-bfloat16/LICENSE: MIT vendor/github.com/davecgh/go-spew/LICENSE: ISC vendor/github.com/dlclark/regexp2/LICENSE: MIT vendor/github.com/emirpasic/gods/v2/LICENSE: BSD-2-Clause AND ISC vendor/github.com/gabriel-vasile/mimetype/LICENSE: MIT vendor/github.com/gin-contrib/cors/LICENSE: MIT vendor/github.com/gin-contrib/sse/LICENSE: MIT vendor/github.com/gin-gonic/gin/LICENSE: MIT vendor/github.com/go-playground/locales/LICENSE: MIT vendor/github.com/go-playground/universal-translator/LICENSE: MIT vendor/github.com/go-playground/validator/v10/LICENSE: MIT vendor/github.com/goccy/go-json/LICENSE: MIT vendor/github.com/gogo/protobuf/LICENSE: BSD-3-Clause vendor/github.com/golang/protobuf/LICENSE: BSD-3-Clause vendor/github.com/google/flatbuffers/LICENSE: Apache-2.0 vendor/github.com/google/go-cmp/LICENSE: BSD-3-Clause vendor/github.com/google/uuid/LICENSE: BSD-3-Clause vendor/github.com/inconshreveable/mousetrap/LICENSE: Apache-2.0 vendor/github.com/json-iterator/go/LICENSE: MIT vendor/github.com/klauspost/cpuid/v2/LICENSE: MIT 
vendor/github.com/leodido/go-urn/LICENSE: MIT vendor/github.com/mattn/go-isatty/LICENSE: MIT vendor/github.com/mattn/go-runewidth/LICENSE: MIT vendor/github.com/modern-go/concurrent/LICENSE: Apache-2.0 vendor/github.com/modern-go/reflect2/LICENSE: Apache-2.0 vendor/github.com/nlpodyssey/gopickle/LICENSE: BSD-2-Clause vendor/github.com/olekukonko/tablewriter/LICENSE.md: MIT vendor/github.com/pdevine/tensor/LICENCE: Apache-2.0 vendor/github.com/pelletier/go-toml/v2/LICENSE: MIT vendor/github.com/pkg/errors/LICENSE: BSD-2-Clause vendor/github.com/pmezard/go-difflib/LICENSE: BSD-3-Clause vendor/github.com/rivo/uniseg/LICENSE.txt: MIT vendor/github.com/spf13/cobra/LICENSE.txt: Apache-2.0 vendor/github.com/spf13/pflag/LICENSE: BSD-3-Clause vendor/github.com/stretchr/testify/LICENSE: MIT vendor/github.com/twitchyliquid64/golang-asm/LICENSE: BSD-3-Clause vendor/github.com/ugorji/go/codec/LICENSE: MIT vendor/github.com/x448/float16/LICENSE: MIT vendor/github.com/xtgo/set/LICENSE: BSD-2-Clause vendor/go4.org/unsafe/assume-no-moving-gc/LICENSE: BSD-3-Clause vendor/golang.org/x/arch/LICENSE: BSD-3-Clause vendor/golang.org/x/crypto/LICENSE: BSD-3-Clause vendor/golang.org/x/exp/LICENSE: BSD-3-Clause vendor/golang.org/x/image/LICENSE: BSD-3-Clause vendor/golang.org/x/net/LICENSE: BSD-3-Clause vendor/golang.org/x/sync/LICENSE: BSD-3-Clause vendor/golang.org/x/sys/LICENSE: BSD-3-Clause vendor/golang.org/x/term/LICENSE: BSD-3-Clause vendor/golang.org/x/text/LICENSE: BSD-3-Clause vendor/golang.org/x/tools/LICENSE: BSD-3-Clause vendor/golang.org/x/xerrors/LICENSE: BSD-3-Clause vendor/gonum.org/v1/gonum/LICENSE: BSD-3-Clause vendor/google.golang.org/protobuf/LICENSE: BSD-3-Clause vendor/gopkg.in/yaml.v3/LICENSE: MIT AND (MIT AND Apache-2.0) vendor/gorgonia.org/vecf32/LICENSE: MIT vendor/gorgonia.org/vecf64/LICENSE: MIT Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSL-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC0-1.0 AND ISC AND LicenseRef-Fedora-Public-Domain AND LicenseRef-scancode-protobuf AND MIT AND NCSA AND NTP AND OpenSSL AND ZPL-2.1 AND Zlib + GO_LDFLAGS=' -X github.com/ollama/ollama/version=0.12.3' + GO_TEST_FLAGS='-buildmode pie -compiler gc' + GO_TEST_EXT_LD_FLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + go-rpm-integration check -i github.com/ollama/ollama -b /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin -s /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build -V 0.12.3-1.fc42 -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT -g /usr/share/gocode -r '.*example.*' Testing in: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src PATH: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/bin:/usr/bin:/bin:/usr/sbin:/sbin GOPATH: /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build:/usr/share/gocode GO111MODULE: off command: go test -buildmode pie -compiler gc -ldflags " -X github.com/ollama/ollama/version=0.12.3 -extldflags '-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '" testing: github.com/ollama/ollama github.com/ollama/ollama/api 2025/10/04 05:41:03 http: superfluous response.WriteHeader call from github.com/ollama/ollama/api.TestClientStream.func1.1 
(client_test.go:128) PASS ok github.com/ollama/ollama/api 0.012s github.com/ollama/ollama/api 2025/10/04 05:41:03 http: superfluous response.WriteHeader call from github.com/ollama/ollama/api.TestClientStream.func1.1 (client_test.go:128) PASS ok github.com/ollama/ollama/api 0.010s github.com/ollama/ollama/app/assets ? github.com/ollama/ollama/app/assets [no test files] github.com/ollama/ollama/app/lifecycle PASS ok github.com/ollama/ollama/app/lifecycle 0.004s github.com/ollama/ollama/app/lifecycle PASS ok github.com/ollama/ollama/app/lifecycle 0.003s github.com/ollama/ollama/app/store ? github.com/ollama/ollama/app/store [no test files] github.com/ollama/ollama/app/tray ? github.com/ollama/ollama/app/tray [no test files] github.com/ollama/ollama/app/tray/commontray ? github.com/ollama/ollama/app/tray/commontray [no test files] github.com/ollama/ollama/auth ? github.com/ollama/ollama/auth [no test files] github.com/ollama/ollama/cmd deleted 'test-model' Couldn't find '/tmp/TestPushHandlersuccessful_push396597319/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmw7v9kTxid/wpn7YKGhM/Dj89RYuoyZXGUtB+b4VwU Couldn't find '/tmp/TestPushHandlernot_signed_in_push1595035061/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoJ+manDI+Cfr1XNLqT+zGdonH655td4UlBxwPBI81i Couldn't find '/tmp/TestPushHandlerunauthorized_push2910193961/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKGZw+g4x6U+DQ/pKyVk3TrI0nPr3coF/GFex3Jfqq+t Added image '/tmp/TestExtractFileDataRemovesQuotedFilepath88031332/001/img.jpg' PASS ok github.com/ollama/ollama/cmd 0.021s github.com/ollama/ollama/cmd deleted 'test-model' Couldn't find '/tmp/TestPushHandlersuccessful_push3591393338/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGWcbLKLqcMYLGjIq10BGgb3uplkQQGZm+piOu/Lz4em Couldn't find '/tmp/TestPushHandlernot_signed_in_push454316622/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM2ctDhe4AyopBRzCxYAnSgEmKZZBmyu/3QESevmE/ph Couldn't find '/tmp/TestPushHandlerunauthorized_push390349019/001/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF+pDVUqrucHlvtYQA0KdLH71hWMlyuKDs6Uv+QGODXW Added image '/tmp/TestExtractFileDataRemovesQuotedFilepath319158017/001/img.jpg' PASS ok github.com/ollama/ollama/cmd 0.021s github.com/ollama/ollama/convert PASS ok github.com/ollama/ollama/convert 0.013s github.com/ollama/ollama/convert PASS ok github.com/ollama/ollama/convert 0.013s github.com/ollama/ollama/convert/sentencepiece ? 
github.com/ollama/ollama/convert/sentencepiece [no test files] github.com/ollama/ollama/discover 2025/10/04 05:43:10 INFO example scenario="#5554 Docker Ollama container inside the LXC" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:43:10 INFO example scenario="#5554 LXC direct output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:43:10 INFO example scenario="#5554 LXC docker container output" cpus="[{ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:29 EfficiencyCoreCount:0 ThreadCount:29}]" 2025/10/04 05:43:10 INFO example scenario="#5554 LXC docker output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:43:10 INFO example scenario="#7359 VMware multi-core core VM" cpus="[{ID:0 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:10 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:12 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:14 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:2 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:4 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:6 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:8 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1}]" 2025/10/04 05:43:10 INFO example scenario="#7287 HyperV 2 socket exposed to VM" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:43:10 INFO looking for compatible GPUs 2025/10/04 05:43:10 INFO no compatible GPUs were discovered PASS ok github.com/ollama/ollama/discover 0.006s github.com/ollama/ollama/discover 2025/10/04 05:43:11 INFO example scenario="#5554 LXC direct output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:43:11 INFO example scenario="#5554 LXC docker container output" cpus="[{ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:29 EfficiencyCoreCount:0 ThreadCount:29}]" 2025/10/04 05:43:11 INFO example scenario="#5554 LXC docker output" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:8 EfficiencyCoreCount:0 ThreadCount:8}]" 2025/10/04 05:43:11 INFO example scenario="#7359 VMware multi-core core VM" cpus="[{ID:0 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:10 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 
6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:12 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:14 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:2 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:4 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:6 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1} {ID:8 VendorID:GenuineIntel ModelName:Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz CoreCount:1 EfficiencyCoreCount:0 ThreadCount:1}]" 2025/10/04 05:43:11 INFO example scenario="#7287 HyperV 2 socket exposed to VM" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD Ryzen 3 4100 4-Core Processor CoreCount:2 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:43:11 INFO example scenario="#5554 Docker Ollama container inside the LXC" cpus="[{ID:0 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4} {ID:1 VendorID:AuthenticAMD ModelName:AMD EPYC 9754 128-Core Processor CoreCount:4 EfficiencyCoreCount:0 ThreadCount:4}]" 2025/10/04 05:43:11 INFO looking for compatible GPUs 2025/10/04 05:43:11 INFO no compatible GPUs were discovered PASS ok github.com/ollama/ollama/discover 0.006s github.com/ollama/ollama/envconfig 2025/10/04 05:43:11 WARN invalid port, using default port=66000 default=11434 2025/10/04 05:43:11 WARN invalid port, using default port=-1 default=11434 2025/10/04 05:43:11 WARN invalid environment variable, using default key=OLLAMA_UINT value=-1 default=11434 2025/10/04 05:43:11 WARN invalid environment variable, using default key=OLLAMA_UINT value=0o10 default=11434 2025/10/04 05:43:11 WARN invalid environment variable, using default key=OLLAMA_UINT value=0x10 default=11434 2025/10/04 05:43:11 WARN invalid environment variable, using default key=OLLAMA_UINT value=string default=11434 PASS ok github.com/ollama/ollama/envconfig 0.004s github.com/ollama/ollama/envconfig 2025/10/04 05:43:11 WARN invalid port, using default port=66000 default=11434 2025/10/04 05:43:11 WARN invalid port, using default port=-1 default=11434 2025/10/04 05:43:11 WARN invalid environment variable, using default key=OLLAMA_UINT value=-1 default=11434 2025/10/04 05:43:11 WARN invalid environment variable, using default key=OLLAMA_UINT value=0o10 default=11434 2025/10/04 05:43:11 WARN invalid environment variable, using default key=OLLAMA_UINT value=0x10 default=11434 2025/10/04 05:43:11 WARN invalid environment variable, using default key=OLLAMA_UINT value=string default=11434 PASS ok github.com/ollama/ollama/envconfig 0.004s github.com/ollama/ollama/format PASS ok github.com/ollama/ollama/format 0.002s github.com/ollama/ollama/format PASS ok github.com/ollama/ollama/format 0.002s github.com/ollama/ollama/fs ? 
github.com/ollama/ollama/fs [no test files] github.com/ollama/ollama/fs/ggml PASS ok github.com/ollama/ollama/fs/ggml 0.004s github.com/ollama/ollama/fs/ggml PASS ok github.com/ollama/ollama/fs/ggml 0.004s github.com/ollama/ollama/fs/gguf PASS ok github.com/ollama/ollama/fs/gguf 0.004s github.com/ollama/ollama/fs/gguf PASS ok github.com/ollama/ollama/fs/gguf 0.004s github.com/ollama/ollama/fs/util/bufioutil PASS ok github.com/ollama/ollama/fs/util/bufioutil 0.002s github.com/ollama/ollama/fs/util/bufioutil PASS ok github.com/ollama/ollama/fs/util/bufioutil 0.001s github.com/ollama/ollama/harmony event: {} event: {Header:{Role:user Channel: Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_weather}} event: {Content:{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|message|>{"sunny": true, "temperature": 20}} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:User asks weather in SF. We need location. Use get_current_weather with location "San Francisco, CA".} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_current_weather}} event: {Content:{"location":"San Francisco, CA"}<|call|>} PASS ok github.com/ollama/ollama/harmony 0.003s github.com/ollama/ollama/harmony event: {} event: {Header:{Role:user Channel: Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:What is 2 + 2?} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_weather}} event: {Content:{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|message|>{"sunny": true, "temperature": 20}} event: {} event: {} event: {Header:{Role:assistant Channel:analysis Recipient:}} event: {Content:User asks weather in SF. We need location. Use get_current_weather with location "San Francisco, CA".} event: {} event: {} event: {Header:{Role:assistant Channel:commentary Recipient:functions.get_current_weather}} event: {Content:{"location":"San Francisco, CA"}<|call|>} PASS ok github.com/ollama/ollama/harmony 0.003s github.com/ollama/ollama/kvcache PASS ok github.com/ollama/ollama/kvcache 0.002s github.com/ollama/ollama/kvcache PASS ok github.com/ollama/ollama/kvcache 0.002s github.com/ollama/ollama/llama PASS ok github.com/ollama/ollama/llama 0.003s github.com/ollama/ollama/llama PASS ok github.com/ollama/ollama/llama 0.004s github.com/ollama/ollama/llama/llama.cpp/common ? github.com/ollama/ollama/llama/llama.cpp/common [no test files] github.com/ollama/ollama/llama/llama.cpp/src ? github.com/ollama/ollama/llama/llama.cpp/src [no test files] github.com/ollama/ollama/llama/llama.cpp/tools/mtmd ? 
github.com/ollama/ollama/llama/llama.cpp/tools/mtmd [no test files] github.com/ollama/ollama/llm 2025/10/04 05:43:20 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:20 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:20 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:20 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:20 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:20 INFO aborting completion request due to client closing the connection PASS ok github.com/ollama/ollama/llm 0.005s github.com/ollama/ollama/llm 2025/10/04 05:43:21 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:21 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:21 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:21 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:21 INFO aborting completion request due to client closing the connection 2025/10/04 05:43:21 INFO aborting completion request due to client closing the connection PASS ok github.com/ollama/ollama/llm 0.005s github.com/ollama/ollama/logutil ? github.com/ollama/ollama/logutil [no test files] github.com/ollama/ollama/ml ? github.com/ollama/ollama/ml [no test files] github.com/ollama/ollama/ml/backend ? github.com/ollama/ollama/ml/backend [no test files] github.com/ollama/ollama/ml/backend/ggml ? github.com/ollama/ollama/ml/backend/ggml [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src ? github.com/ollama/ollama/ml/backend/ggml/ggml/src [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86 [no test files] github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile ? github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile [no test files] github.com/ollama/ollama/ml/nn ? github.com/ollama/ollama/ml/nn [no test files] github.com/ollama/ollama/ml/nn/fast ? github.com/ollama/ollama/ml/nn/fast [no test files] github.com/ollama/ollama/ml/nn/pooling 2025/10/04 05:43:23 INFO looking for compatible GPUs 2025/10/04 05:43:23 INFO no compatible GPUs were discovered 2025/10/04 05:43:23 INFO architecture=test file_type=unknown name="" description="" num_tensors=1 num_key_values=3 2025/10/04 05:43:23 INFO system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) PASS ok github.com/ollama/ollama/ml/nn/pooling 0.009s github.com/ollama/ollama/ml/nn/pooling 2025/10/04 05:43:23 INFO looking for compatible GPUs 2025/10/04 05:43:23 INFO no compatible GPUs were discovered 2025/10/04 05:43:23 INFO architecture=test file_type=unknown name="" description="" num_tensors=1 num_key_values=3 2025/10/04 05:43:23 INFO system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) PASS ok github.com/ollama/ollama/ml/nn/pooling 0.008s github.com/ollama/ollama/ml/nn/rope ? 
github.com/ollama/ollama/ml/nn/rope [no test files] github.com/ollama/ollama/model time=2025-10-04T05:43:24.982Z level=DEBUG msg="adding bos token to prompt" id=1 time=2025-10-04T05:43:24.982Z level=DEBUG msg="adding eos token to prompt" id=2 PASS ok github.com/ollama/ollama/model 0.458s github.com/ollama/ollama/model time=2025-10-04T05:43:25.639Z level=DEBUG msg="adding bos token to prompt" id=1 time=2025-10-04T05:43:25.639Z level=DEBUG msg="adding eos token to prompt" id=2 PASS ok github.com/ollama/ollama/model 0.227s github.com/ollama/ollama/model/imageproc PASS ok github.com/ollama/ollama/model/imageproc 0.020s github.com/ollama/ollama/model/imageproc PASS ok github.com/ollama/ollama/model/imageproc 0.020s github.com/ollama/ollama/model/input ? github.com/ollama/ollama/model/input [no test files] github.com/ollama/ollama/model/models ? github.com/ollama/ollama/model/models [no test files] github.com/ollama/ollama/model/models/bert ? github.com/ollama/ollama/model/models/bert [no test files] github.com/ollama/ollama/model/models/deepseek2 ? github.com/ollama/ollama/model/models/deepseek2 [no test files] github.com/ollama/ollama/model/models/gemma2 ? github.com/ollama/ollama/model/models/gemma2 [no test files] github.com/ollama/ollama/model/models/gemma3 ? github.com/ollama/ollama/model/models/gemma3 [no test files] github.com/ollama/ollama/model/models/gemma3n ? github.com/ollama/ollama/model/models/gemma3n [no test files] github.com/ollama/ollama/model/models/gptoss ? github.com/ollama/ollama/model/models/gptoss [no test files] github.com/ollama/ollama/model/models/llama ? github.com/ollama/ollama/model/models/llama [no test files] github.com/ollama/ollama/model/models/llama4 PASS ok github.com/ollama/ollama/model/models/llama4 0.011s github.com/ollama/ollama/model/models/llama4 PASS ok github.com/ollama/ollama/model/models/llama4 0.012s github.com/ollama/ollama/model/models/mistral3 ? github.com/ollama/ollama/model/models/mistral3 [no test files] github.com/ollama/ollama/model/models/mllama PASS ok github.com/ollama/ollama/model/models/mllama 0.443s github.com/ollama/ollama/model/models/mllama PASS ok github.com/ollama/ollama/model/models/mllama 0.455s github.com/ollama/ollama/model/models/qwen2 ? github.com/ollama/ollama/model/models/qwen2 [no test files] github.com/ollama/ollama/model/models/qwen25vl ? github.com/ollama/ollama/model/models/qwen25vl [no test files] github.com/ollama/ollama/model/models/qwen3 ? github.com/ollama/ollama/model/models/qwen3 [no test files] github.com/ollama/ollama/model/parsers PASS ok github.com/ollama/ollama/model/parsers 0.004s github.com/ollama/ollama/model/parsers PASS ok github.com/ollama/ollama/model/parsers 0.004s github.com/ollama/ollama/model/renderers PASS ok github.com/ollama/ollama/model/renderers 0.003s github.com/ollama/ollama/model/renderers PASS ok github.com/ollama/ollama/model/renderers 0.003s github.com/ollama/ollama/openai PASS ok github.com/ollama/ollama/openai 0.009s github.com/ollama/ollama/openai PASS ok github.com/ollama/ollama/openai 0.010s github.com/ollama/ollama/parser PASS ok github.com/ollama/ollama/parser 0.005s github.com/ollama/ollama/parser PASS ok github.com/ollama/ollama/parser 0.006s github.com/ollama/ollama/progress ? github.com/ollama/ollama/progress [no test files] github.com/ollama/ollama/readline ? github.com/ollama/ollama/readline [no test files] github.com/ollama/ollama/runner ? 
github.com/ollama/ollama/runner [no test files] github.com/ollama/ollama/runner/common PASS ok github.com/ollama/ollama/runner/common 0.002s github.com/ollama/ollama/runner/common PASS ok github.com/ollama/ollama/runner/common 0.002s github.com/ollama/ollama/runner/llamarunner PASS ok github.com/ollama/ollama/runner/llamarunner 0.004s github.com/ollama/ollama/runner/llamarunner PASS ok github.com/ollama/ollama/runner/llamarunner 0.004s github.com/ollama/ollama/runner/ollamarunner PASS ok github.com/ollama/ollama/runner/ollamarunner 0.004s github.com/ollama/ollama/runner/ollamarunner PASS ok github.com/ollama/ollama/runner/ollamarunner 0.004s github.com/ollama/ollama/sample PASS ok github.com/ollama/ollama/sample 0.151s github.com/ollama/ollama/sample PASS ok github.com/ollama/ollama/sample 0.158s github.com/ollama/ollama/server time=2025-10-04T05:43:38.157Z level=INFO source=logging.go:32 msg="ollama app started" time=2025-10-04T05:43:38.158Z level=DEBUG source=convert.go:232 msg="vocabulary is smaller than expected, padding with dummy tokens" expect=32000 actual=1 time=2025-10-04T05:43:38.164Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=general.file_type type=uint32 time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=general.quantization_version type=uint32 time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=llama.vocab_size type=uint32 time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.pre type=string time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.164Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.191Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:43:38.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string 
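The go test invocation driving this %check output embeds the package version through the Go linker's -X flag (GO_LDFLAGS=' -X github.com/ollama/ollama/version=0.12.3' earlier in the log). A self-contained sketch of the same -ldflags "-X importpath.name=value" mechanism follows; the module path, file name and variable name are hypothetical, not ollama's.

# Illustrative only: stamping a version string at link time with -ldflags "-X ...".
mkdir -p /tmp/ldflags-demo && cd /tmp/ldflags-demo
cat > main.go <<'EOF'
package main

import "fmt"

// version is a package-level string; the linker overrides it, "dev" is only the fallback.
var version = "dev"

func main() { fmt.Println(version) }
EOF
go mod init example.com/ldflags-demo
go build -ldflags "-X main.version=0.12.3" -o demo .
./demo   # prints: 0.12.3

The same flag works for go test, which is why the hardened -extldflags and the version stamp both appear in the test command recorded above.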
time=2025-10-04T05:43:38.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:43:38.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:43:38.192Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.193Z level=ERROR source=images.go:157 msg="unknown capability" capability=unknown time=2025-10-04T05:43:38.195Z level=WARN source=manifest.go:160 msg="bad manifest name" path=host/namespace/model/.hidden time=2025-10-04T05:43:38.195Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:43:38.195Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:43:38.195Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:43:38.196Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=4 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:38.196Z level=WARN source=quantization.go:145 msg="tensor cols 100 are not divisible by 32, required for Q8_0 - using fallback quantization F16" time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.196Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.196Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.196Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[512 2]" offset=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=output.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:43:38.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.196Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.196Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:43:38.196Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:43:38.196Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=12 shape="[512 2]" offset=0 time=2025-10-04T05:43:38.196Z level=DEBUG source=gguf.go:627 msg=output.weight kind=14 shape="[256 4]" offset=576 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape="[512 2]" offset=0 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=4096 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:38.197Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=blk.0.attn_v.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=14 shape="[512 2]" offset=0 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=864 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.197Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[32 16 2]" offset=0 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:38.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=8 shape="[32 16 2]" offset=0 time=2025-10-04T05:43:38.197Z level=DEBUG source=gguf.go:627 msg=output.weight kind=8 shape="[256 4]" offset=1088 time=2025-10-04T05:43:38.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.architecture default=unknown time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.200Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.200Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.201Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.202Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.203Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.203Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.204Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.204Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.205Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:38.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.207Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:43:38.207Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment 
default=32 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.block_count default=0 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.209Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.209Z level=DEBUG source=create.go:98 msg="create model from model name" from=bob resp = api.ShowResponse{License:"", Modelfile:"# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM test:latest\n\nFROM \nTEMPLATE {{ .Prompt }}\n", Parameters:"", Template:"{{ .Prompt }}", System:"", Renderer:"", Parser:"", Details:api.ModelDetails{ParentModel:"", Format:"", Family:"gptoss", Families:[]string{"gptoss"}, ParameterSize:"20.9B", QuantizationLevel:"MXFP4"}, Messages:[]api.Message(nil), RemoteModel:"bob", RemoteHost:"https://ollama.com:11434", ModelInfo:map[string]interface {}{"general.architecture":"gptoss", "gptoss.context_length":131072, "gptoss.embedding_length":2880}, ProjectorInfo:map[string]interface {}(nil), Tensors:[]api.Tensor(nil), Capabilities:[]model.Capability{"completion", "tools", "thinking"}, ModifiedAt:time.Date(2025, time.October, 4, 5, 43, 38, 210081113, time.UTC)} time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.210Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:43:38.211Z level=DEBUG source=gguf.go:578 msg=tokenizer.chat_template type=string time=2025-10-04T05:43:38.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.220Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.221Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.221Z 
level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.221Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.221Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.221Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.221Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.221Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.221Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.221Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.222Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-10-04T05:43:38.222Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA" time=2025-10-04T05:43:38.222Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so* time=2025-10-04T05:43:38.222Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build3026786075/b001/libcuda.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]" time=2025-10-04T05:43:38.223Z level=DEBUG 
source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:43:38.223Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so* time=2025-10-04T05:43:38.223Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build3026786075/b001/libcudart.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcudart.so* /tmp/go-build3026786075/b001/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]" time=2025-10-04T05:43:38.223Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:43:38.223Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu" time=2025-10-04T05:43:38.223Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered" time=2025-10-04T05:43:38.223Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:43:38.223Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.223Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.225Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.225Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.227Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.227Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.227Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.228Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.228Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.228Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.230Z level=DEBUG source=gpu.go:410 msg="updating system 
memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.230Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.230Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.232Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.232Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.232Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.233Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.233Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.233Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.235Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.235Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.235Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.237Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.237Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.237Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.239Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.239Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.239Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.240Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" 
now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.240Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.240Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.242Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.242Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.242Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly4007285373/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.243Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.243Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.244Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.244Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.244Z level=DEBUG source=sched.go:121 msg="starting llm 
scheduler" time=2025-10-04T05:43:38.244Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.244Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.244Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.244Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.244Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.245Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.245Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.245Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.247Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.247Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.247Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.249Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.249Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.249Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.250Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.250Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.252Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.252Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.252Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.254Z level=DEBUG 
source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.254Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.254Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.256Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.256Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.256Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.257Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.257Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.257Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.259Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.259Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.261Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.261Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.261Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly1152014131/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.262Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.262Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.262Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.264Z level=DEBUG source=manifest.go:53 msg="layer does not exist" digest=sha256:776957f9c9239232f060e29d642d8f5ef3bb931f485c27a13ae6385515fb425c time=2025-10-04T05:43:38.264Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.264Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 
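The gguf.go:578 and gguf.go:627 records here and just below enumerate the test model's metadata keys (with their parsed Go types) and its tensors (with kind, shape and byte offset). The sketch below shows, under assumed names, how records of that shape could be emitted with log/slog; tensorInfo and the sample values are illustrative only and are not taken from ollama's gguf.go.

package main

import (
	"fmt"
	"log/slog"
)

// tensorInfo is a hypothetical description of one GGUF tensor entry,
// mirroring the fields visible in the gguf.go:627 records above.
type tensorInfo struct {
	Name   string
	Kind   uint32   // GGML dtype id; 0 is F32 in upstream GGML
	Shape  []uint64 // tensor dimensions
	Offset uint64   // byte offset into the tensor-data section
}

func main() {
	slog.SetLogLoggerLevel(slog.LevelDebug)

	// Metadata keys with parsed values, logged as "<key> type=<Go type>".
	kv := map[string]any{
		"general.architecture":  "llama",
		"llama.block_count":     uint32(1),
		"tokenizer.ggml.scores": []float32{0},
	}
	for k, v := range kv {
		slog.Debug(k, "type", fmt.Sprintf("%T", v))
	}

	// Tensor entries, logged as "<name> kind=... shape=... offset=...".
	tensors := []tensorInfo{
		{Name: "blk.0.attn_k.weight", Kind: 0, Shape: []uint64{1}, Offset: 0},
		{Name: "token_embd.weight", Kind: 0, Shape: []uint64{1}, Offset: 320},
	}
	for _, t := range tensors {
		slog.Debug(t.Name, "kind", t.Kind, "shape", t.Shape, "offset", t.Offset)
	}
}

The single-element shapes and 32-byte offset steps in the log are consistent with the tiny placeholder tensors the test fixtures use; real models would report multi-dimensional shapes and much larger offsets.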
time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.264Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.265Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.265Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.265Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.265Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.265Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.266Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.266Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:43:38.266Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.267Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.267Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.267Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:43:38.267Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:43:38.267Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.267Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.267Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.268Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.268Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.268Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3437390845/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.269Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.269Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.269Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3437390845/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.271Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:38.271Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.271Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.271Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.271Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.271Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3437390845/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.273Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.273Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.273Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3437390845/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.276Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.276Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.276Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3437390845/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.277Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.277Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.277Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3437390845/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.279Z level=DEBUG source=gpu.go:410 msg="updating system 
memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.279Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.279Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat3437390845/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.311Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.311Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.311Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.311Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.311Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.312Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.312Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.312Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.312Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.312Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template 
default="" time=2025-10-04T05:43:38.312Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.313Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:43:38.313Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:43:38.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:43:38.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.314Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.315Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.315Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2485889169/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.316Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.316Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2485889169/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.318Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:38.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.318Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.318Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2485889169/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.321Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" 
now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.321Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2485889169/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.322Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.322Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.322Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2485889169/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.324Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:38.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.324Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.325Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.325Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2485889169/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.326Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.326Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.326Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2485889169/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.329Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.329Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.329Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate2485889169/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.330Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.330Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.330Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.330Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.330Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.330Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.331Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.331Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.332Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag1506140360/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.333Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.333Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:43:38.333Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag1506140360/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.335Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.335Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.335Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag1506140360/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.337Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.337Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.337Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag1506140360/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.339Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.339Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.339Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag1506140360/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:38.381Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.381Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.381Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 
shape=[1] offset=32 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.381Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.382Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.382Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.382Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimecontent_streams_as_it_arrives1128038107/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.473Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.473Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.473Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length 
type=uint32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.473Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.473Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.474Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.474Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.474Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimethinking_streams_separately_from_content1259994164/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.596Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.596Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.596Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.596Z level=DEBUG 
source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.596Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.596Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.597Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.597Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.597Z level=DEBUG source=sched.go:208 msg="loading first model" 
model=/tmp/TestChatHarmonyParserStreamingRealtimepartial_tags_buffer_until_complete354535406/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.749Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.749Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.749Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.749Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.749Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.750Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.750Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.751Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.751Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.751Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimesimple_assistant_after_analysis1455616619/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.781Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.781Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.781Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.781Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.781Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.782Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.782Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown 
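The gguf.go:627 records above enumerate the tiny test model's tensors as (name, kind, shape, offset) tuples: eleven single-element weights whose offsets advance in 32-byte steps, consistent with the 32-byte default reported for general.alignment. Below is a minimal Go sketch of that kind of listing; the tensorInfo type and the loop are illustrative stand-ins, not ollama's internal types.

```go
package main

import "fmt"

// tensorInfo mirrors the fields printed by the gguf.go:627 debug lines:
// tensor name, ggml kind, shape, and byte offset into the data section.
// This is an illustrative type, not ollama's actual representation.
type tensorInfo struct {
	Name   string
	Kind   uint32
	Shape  []uint64
	Offset uint64
}

func main() {
	const alignment = 32 // general.alignment falls back to 32 when the key is absent

	names := []string{
		"blk.0.attn_k.weight", "blk.0.attn_norm.weight", "blk.0.attn_output.weight",
		"blk.0.attn_q.weight", "blk.0.attn_v.weight", "blk.0.ffn_down.weight",
		"blk.0.ffn_gate.weight", "blk.0.ffn_norm.weight", "blk.0.ffn_up.weight",
		"output.weight", "token_embd.weight",
	}

	// Each single-element tensor is padded to the alignment, so successive
	// offsets advance in 32-byte steps (0, 32, 64, ... 320), as in the log.
	var offset uint64
	for _, name := range names {
		t := tensorInfo{Name: name, Kind: 0, Shape: []uint64{1}, Offset: offset}
		fmt.Printf("msg=%s kind=%d shape=%v offset=%d\n", t.Name, t.Kind, t.Shape, t.Offset)
		offset += alignment
	}
}
```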
time=2025-10-04T05:43:38.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.783Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.784Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.784Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.784Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_parsed_and_returned_correctly2949867525/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.814Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.814Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.814Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.814Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.814Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.815Z level=DEBUG 
source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.815Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.815Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.815Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.815Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.815Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.815Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.815Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.815Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.816Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.816Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.816Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_with_streaming_JSON_across_chunks1302422576/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.907Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.907Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.907Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 
msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.907Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.909Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.909Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.909Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingSimple2949951814/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.910Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.910Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.910Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.910Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
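The recurring ggml.go:276 "key with type not found" records reflect optional GGUF metadata reads with typed fallbacks: when a key such as general.alignment, general.file_type or tokenizer.chat_template is absent (or not of the expected type), a default (32, 0, "") is substituted and the miss is logged at DEBUG. A hedged sketch of that lookup pattern using Go generics; the KV map and keyValue helper are assumptions for illustration, not ollama's actual API.

```go
package main

import (
	"fmt"
	"log/slog"
)

// KV is a loose stand-in for GGUF metadata: arbitrary keys, arbitrary value types.
type KV map[string]any

// keyValue returns kv[key] if it is present with the expected type T; otherwise
// it logs the miss (as in the ggml.go:276 lines) and returns the supplied default.
func keyValue[T any](kv KV, key string, def T) T {
	if v, ok := kv[key].(T); ok {
		return v
	}
	slog.Debug("key with type not found", "key", key, "default", def)
	return def
}

func main() {
	// Make DEBUG records visible for this demo (Go 1.22+).
	slog.SetLogLoggerLevel(slog.LevelDebug)

	kv := KV{"general.architecture": "llama", "llama.block_count": uint32(1)}

	fmt.Println(keyValue(kv, "general.architecture", "unknown")) // present: "llama"
	fmt.Println(keyValue(kv, "general.alignment", uint32(32)))   // absent: falls back to 32
	fmt.Println(keyValue(kv, "general.file_type", uint32(0)))    // absent: falls back to 0
}
```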
time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.910Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.910Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.910Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.911Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.912Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingsimple_message_without_thinking2822911580/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.912Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.912Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.912Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 
msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.912Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.913Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.913Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.913Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.913Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.913Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.913Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.913Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.914Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:38.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.914Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingmessage_with_analysis_channel_for_thinking1083562308/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.914Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.914Z level=DEBUG 
source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:38.915Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:38.915Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:38.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:38.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.916Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.3 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.3 GiB" now.free_swap="139.4 GiB" 
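The records themselves are logfmt-style text in the shape time=... level=... source=file.go:line msg=... followed by arbitrary key=value attributes, which is what Go's log/slog TextHandler emits. The sketch below produces comparably shaped lines; trimming source down to file:line via ReplaceAttr is an assumption about handler configuration, so treat it as an approximation rather than ollama's exact setup.

```go
package main

import (
	"log/slog"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	h := slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
		Level:     slog.LevelDebug,
		AddSource: true,
		// Shorten the source attribute to "file.go:line", roughly matching the
		// source=sched.go:208 style seen in this log (assumed, not verified).
		ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
			if a.Key == slog.SourceKey {
				if src, ok := a.Value.Any().(*slog.Source); ok {
					a.Value = slog.StringValue(filepath.Base(src.File) + ":" + strconv.Itoa(src.Line))
				}
			}
			return a
		},
	})
	slog.SetDefault(slog.New(h))

	// Emits e.g.: time=... level=DEBUG source=main.go:NN msg="loading first model" model=/tmp/...
	slog.Debug("loading first model", "model", "/tmp/example/blobs/sha256-...")
}
```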
time=2025-10-04T05:43:38.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.916Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingstreaming_with_partial_tags_across_boundaries3211027057/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:38.916Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.916Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 
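One detail worth noting in the span above: when a test blob carries no general.architecture, the fallback becomes "unknown" and the follow-up lookups are unknown.block_count and unknown.vision.block_count, just as earlier blobs produced llama.*, bert.* and gptoss.* keys. The architecture string evidently serves as a prefix when building per-model metadata keys; a tiny illustrative sketch of that derivation (the helper name is hypothetical):

```go
package main

import "fmt"

// archKey builds an architecture-prefixed metadata key, the pattern behind the
// llama.block_count / gptoss.block_count / unknown.block_count lookups in the log.
func archKey(arch, suffix string) string {
	if arch == "" {
		arch = "unknown" // mirrors general.architecture default=unknown
	}
	return arch + "." + suffix
}

func main() {
	for _, arch := range []string{"llama", "gptoss", ""} {
		fmt.Println(archKey(arch, "block_count"), archKey(arch, "vision.block_count"))
	}
	// Output:
	// llama.block_count llama.vision.block_count
	// gptoss.block_count gptoss.vision.block_count
	// unknown.block_count unknown.vision.block_count
}
```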
time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.type default=unknown time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 
msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:43:38 | 200 | 20.418µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/04 - 05:43:38 | 200 | 51.061µs | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/04 - 05:43:38 | 200 | 75.131µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:43:38 | 200 | 103.444µs | 127.0.0.1 | GET "/api/tags" time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.924Z level=INFO source=images.go:518 msg="total blobs: 3" time=2025-10-04T05:43:38.925Z level=INFO source=images.go:525 msg="total unused blobs removed: 0" time=2025-10-04T05:43:38.925Z level=INFO source=server.go:164 msg=http status=200 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:38766 proto=HTTP/1.1 query="" time=2025-10-04T05:43:38.925Z level=WARN source=server.go:164 msg=http error="model not found" status=404 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:38766 proto=HTTP/1.1 query="" [GIN] 2025/10/04 - 05:43:38 | 200 | 142.353µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.926Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:43:38 | 200 | 1.776914ms | 127.0.0.1 | POST "/api/create" time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:43:38 | 200 | 324.358µs | 127.0.0.1 | POST "/api/copy" time=2025-10-04T05:43:38.928Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.929Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:43:38 | 200 | 426.398µs | 127.0.0.1 | POST "/api/show" time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:43:38 | 200 | 
448.77µs | 127.0.0.1 | GET "/v1/models/show-model" [GIN] 2025/10/04 - 05:43:38 | 405 | 1.134µs | 127.0.0.1 | GET "/api/show" time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.931Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.932Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.934Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.934Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.934Z level=DEBUG 
source=gguf.go:578 msg=general.type type=string time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.block_count default=0 time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.vision.block_count default=0 time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.block_count default=0 time=2025-10-04T05:43:38.934Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.vision.block_count default=0 time=2025-10-04T05:43:38.935Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:38.935Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.935Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:38.935Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.935Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.935Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:43:38.935Z level=INFO source=sched.go:417 msg="NewLlamaServer failed" model=foo error="something failed to load model blah: this model may be incompatible with your version of Ollama. 
If you previously pulled this model, try updating it by running `ollama pull `" time=2025-10-04T05:43:38.935Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:43:38.935Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:38.935Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:38.936Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open dummy_model_path: no such file or directory" time=2025-10-04T05:43:38.936Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:43:38.936Z level=ERROR source=sched.go:476 msg="error loading llama server" error="wait failure" time=2025-10-04T05:43:38.936Z level=DEBUG source=sched.go:478 msg="triggering expiration for failed load" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=dummy_model_path runner.num_ctx=4096 time=2025-10-04T05:43:38.937Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
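The repeated ggml.go:276 DEBUG entries above come from typed metadata lookups against the GGUF key/value store: when a key such as general.alignment or tokenizer.chat_template is missing from the tiny test fixture (which only defines the llama.* and tokenizer.ggml.* keys listed by the gguf.go:578 lines), the reader logs the miss and falls back to the supplied default. Below is a minimal Go sketch of that lookup-with-default pattern; the KV map and keyValue helper are illustrative stand-ins, not ollama's actual types.

// Illustrative sketch of the "key with type not found" pattern seen at
// ggml.go:276: look up a typed metadata key and fall back to a default.
// KV and keyValue are hypothetical stand-ins, not ollama's real code.
package main

import (
	"fmt"
	"log/slog"
)

type KV map[string]any

// keyValue returns kv[key] if it exists with the requested type T,
// otherwise it logs the miss at DEBUG and returns the supplied default.
func keyValue[T any](kv KV, key string, def T) T {
	if v, ok := kv[key]; ok {
		if t, ok := v.(T); ok {
			return t
		}
	}
	slog.Debug("key with type not found", "key", key, "default", def)
	return def
}

func main() {
	slog.SetLogLoggerLevel(slog.LevelDebug) // make the DEBUG lines visible (Go 1.22+)
	kv := KV{
		"general.architecture": "llama",
		"llama.block_count":    uint32(1),
	}
	fmt.Println(keyValue(kv, "general.architecture", "unknown")) // "llama"
	fmt.Println(keyValue(kv, "general.alignment", uint32(32)))   // 32, via the default branch
	fmt.Println(keyValue(kv, "tokenizer.chat_template", ""))     // "", via the default branch
}

With only a handful of keys present in the fixture, almost every lookup takes the default branch, which is why these DEBUG lines dominate the test output.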
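The [GIN] lines record the HTTP routes these server tests exercise: /api/version, /api/tags, /api/create, /api/copy, /api/show, /api/delete, plus the OpenAI-compatible /v1/models listing. The sketch below probes the read-only routes against a locally running ollama server; the 127.0.0.1:11434 base URL is an assumption about a default local install (the tests here bind to an ephemeral test server instead), and the output is printed raw rather than parsed.

// Small probe of the read-only routes seen in the [GIN] access lines above.
// Assumes an ollama server is reachable at the address below.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	const base = "http://127.0.0.1:11434" // assumption: default local listen address
	for _, path := range []string{"/api/version", "/api/tags", "/v1/models"} {
		resp, err := http.Get(base + path)
		if err != nil {
			fmt.Println("GET", path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %d %s\n", path, resp.StatusCode, body)
	}
}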
time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.937Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.937Z level=INFO source=sched_test.go:179 msg=a time=2025-10-04T05:43:38.937Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.937Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSameModelSameRequest1750717220/002/2364774798 time=2025-10-04T05:43:38.938Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:38.938Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest1750717220/002/2364774798 runner.num_ctx=4096 time=2025-10-04T05:43:38.938Z level=INFO source=sched_test.go:196 msg=b time=2025-10-04T05:43:38.938Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSameModelSameRequest1750717220/002/2364774798 time=2025-10-04T05:43:38.938Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.938Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.938Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.938Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.938Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest1750717220/002/2364774798 runner.num_ctx=4096 time=2025-10-04T05:43:38.938Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:43:38.938Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.938Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.938Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.938Z level=INFO source=sched_test.go:223 msg=a time=2025-10-04T05:43:38.938Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.939Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.939Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 time=2025-10-04T05:43:38.939Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:38.939Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 time=2025-10-04T05:43:38.939Z level=INFO source=sched_test.go:241 msg=b time=2025-10-04T05:43:38.939Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 time=2025-10-04T05:43:38.939Z level=DEBUG source=sched.go:154 msg=reloading runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 time=2025-10-04T05:43:38.939Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:43:38.939Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 
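The sched.go entries around this point trace the runner life cycle the scheduler tests drive: a model is loaded ("loading first model", "loaded runners"), a repeat request either reuses it ("evaluating already loaded") or forces a reload, and once the last request releases it the runner is expired immediately or after a keep-alive timer, then unloaded ("sending an unloaded event", "unload completed"). Below is a rough Go sketch of that reference-count plus expiration-timer idea; runnerRef and its fields are illustrative stand-ins, not ollama's actual scheduler types.

// Minimal sketch of the idle-expiration behaviour described by the sched.go
// messages ("runner with non-zero duration has gone idle, adding timer",
// "resetting model to expire immediately to make room"). Hypothetical types.
package main

import (
	"fmt"
	"sync"
	"time"
)

type runnerRef struct {
	mu              sync.Mutex
	refCount        int           // in-flight requests using this model
	sessionDuration time.Duration // keep-alive after the last request finishes
	expireTimer     *time.Timer
	expiredCh       chan *runnerRef
}

// release is called when a request finishes; once idle, the runner either
// expires immediately (zero duration) or after its keep-alive timer fires.
func (r *runnerRef) release() {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.refCount--
	if r.refCount > 0 {
		return
	}
	if r.sessionDuration <= 0 {
		r.expiredCh <- r // "runner with zero duration has gone idle, expiring to unload"
		return
	}
	// "runner with non-zero duration has gone idle, adding timer"
	r.expireTimer = time.AfterFunc(r.sessionDuration, func() { r.expiredCh <- r })
}

// expireImmediately mirrors "resetting model to expire immediately to make
// room": the keep-alive is dropped so the runner unloads as soon as it is idle.
func (r *runnerRef) expireImmediately() {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.sessionDuration = 0
	if r.expireTimer != nil {
		r.expireTimer.Stop()
	}
}

func main() {
	expired := make(chan *runnerRef, 1)
	r := &runnerRef{refCount: 1, sessionDuration: 5 * time.Millisecond, expiredCh: expired}
	r.release() // last request done: timer armed for 5ms
	<-expired
	fmt.Println("runner expired event received")
}

In this sketch the "resetting model to expire immediately to make room" step simply drops the keep-alive, so the pending unload happens as soon as outstanding requests drain, which matches the "waiting for pending requests to complete and unload to occur" message above.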
time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 time=2025-10-04T05:43:38.940Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 time=2025-10-04T05:43:38.940Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="20 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsSimpleReloadSameModel1618136694/002/1139715534 runner.num_ctx=4096 time=2025-10-04T05:43:38.940Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.940Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.940Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.940Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.940Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.941Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.941Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.942Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.942Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.943Z level=INFO source=sched_test.go:274 msg=a time=2025-10-04T05:43:38.943Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.943Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 time=2025-10-04T05:43:38.943Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:38.943Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 time=2025-10-04T05:43:38.943Z level=INFO source=sched_test.go:293 msg=b time=2025-10-04T05:43:38.943Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:43:38.943Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.943Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:43:38.943Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:43:38.943Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:43:38.943Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:43:38.943Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 time=2025-10-04T05:43:38.943Z level=INFO source=sched_test.go:311 msg=c time=2025-10-04T05:43:38.943Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.943Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=cpu available="24.2 GiB" time=2025-10-04T05:43:38.943Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=cpu total="29.8 GiB" available="19.6 GiB" time=2025-10-04T05:43:38.944Z level=INFO source=sched.go:470 msg="loaded runners" count=3 time=2025-10-04T05:43:38.944Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:43:38.944Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-4a runner.inference=cpu runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/006/815597731 runner.num_ctx=4096 time=2025-10-04T05:43:38.944Z level=INFO source=sched_test.go:329 msg=d time=2025-10-04T05:43:38.944Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.944Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:43:38.944Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:162 msg="max runners achieved, unloading one to make room" runner_count=3 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 runner.num_ctx=4096 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/002/3418073448 time=2025-10-04T05:43:38.946Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:43:38.946Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="3.7 GiB" time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3b runner.inference=metal 
runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:43:38.946Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 runner.num_ctx=4096 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/004/717978101 time=2025-10-04T05:43:38.952Z level=DEBUG source=ggml.go:276 msg="key 
with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:43:38.952Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:43:38.952Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3c runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels1058257635/008/2954223660 runner.num_ctx=4096 time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:38.952Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.952Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.952Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.952Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.952Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.953Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.953Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.953Z level=DEBUG 
source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.953Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.953Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:38.953Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:38.953Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.953Z level=INFO source=sched_test.go:367 msg=a time=2025-10-04T05:43:38.953Z level=INFO source=sched_test.go:370 msg=b time=2025-10-04T05:43:38.953Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:38.953Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:38.953Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGetRunner2623279816/002/1850598039 time=2025-10-04T05:43:38.953Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:38.953Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 runner.num_ctx=4096 time=2025-10-04T05:43:38.953Z level=INFO source=sched_test.go:394 msg=c time=2025-10-04T05:43:38.953Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open bad path: no such file or directory" time=2025-10-04T05:43:38.953Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:38.953Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 runner.num_ctx=4096 duration=2ms time=2025-10-04T05:43:38.953Z level=DEBUG source=sched.go:304 msg="after processing request 
finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:38.956Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 runner.num_ctx=4096 time=2025-10-04T05:43:38.956Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 runner.num_ctx=4096 time=2025-10-04T05:43:38.956Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 runner.num_ctx=4096 time=2025-10-04T05:43:38.956Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 runner.num_ctx=4096 time=2025-10-04T05:43:38.956Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 runner.num_ctx=4096 time=2025-10-04T05:43:38.956Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 time=2025-10-04T05:43:38.956Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner2623279816/002/1850598039 time=2025-10-04T05:43:38.956Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:39.004Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:43:39.004Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 
refCount=0 time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:43:39.004Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:43:39.024Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:39.024Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:39.024Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:39.024Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:39.025Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:39.025Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:39.025Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:39.025Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:39.025Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestPrematureExpired1468815884/002/467911259 time=2025-10-04T05:43:39.025Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:39.025Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" 
runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 time=2025-10-04T05:43:39.025Z level=INFO source=sched_test.go:481 msg="sending premature expired event now" time=2025-10-04T05:43:39.025Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 time=2025-10-04T05:43:39.025Z level=DEBUG source=sched.go:310 msg="expired event with positive ref count, retrying" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:43:39.030Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:39.030Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:43:39.030Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 runner.num_ctx=4096 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 time=2025-10-04T05:43:39.035Z level=DEBUG source=sched.go:332 msg="duplicate expired event, ignoring" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1468815884/002/467911259 time=2025-10-04T05:43:39.060Z level=ERROR source=sched.go:272 msg="finished request signal received after model unloaded" modelPath=/tmp/TestPrematureExpired1468815884/002/467911259 time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=1 library=a available="900 B" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:39.066Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=1 library=a total="1000 B" available="825 B" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=2 library=a available="1.9 KiB" time=2025-10-04T05:43:39.066Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=2 library=a total="2.0 KiB" available="1.8 KiB" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=a time=2025-10-04T05:43:39.066Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=b 
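The sched.go messages in this stretch trace the keep-alive lifecycle of a loaded runner: the last request finishes ("runner with non-zero duration has gone idle, adding timer"), the timer fires ("timer expired, expiring to unload"), the expired event is handled ("got lock to unload expired event"), an unloaded event is sent, and a premature expired event is retried while the ref count is still positive. The Go sketch below only illustrates that timer-driven sequence under hypothetical types and channel names; it is not ollama's scheduler code.

package main

import (
	"fmt"
	"sync"
	"time"
)

// runner is a hypothetical stand-in for a loaded model runner.
type runner struct {
	mu              sync.Mutex
	refCount        int           // in-flight requests using this runner
	sessionDuration time.Duration // keep-alive before unloading
}

// requestFinished mirrors "runner with non-zero duration has gone idle,
// adding timer": once the last request completes, an expiration timer is
// armed (or the runner expires immediately when the duration is zero).
func (r *runner) requestFinished(expired chan<- *runner) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.refCount--
	if r.refCount > 0 {
		return
	}
	if r.sessionDuration <= 0 {
		expired <- r // "runner with zero duration has gone idle, expiring to unload"
		return
	}
	time.AfterFunc(r.sessionDuration, func() {
		expired <- r // "timer expired, expiring to unload"
	})
}

// processExpired mirrors "runner expired event received": a premature event
// with a positive ref count is retried, otherwise the runner is unloaded
// and an unloaded event is sent.
func processExpired(r *runner, expired chan *runner, unloaded chan<- *runner) {
	r.mu.Lock()
	busy := r.refCount > 0
	r.mu.Unlock()
	if busy {
		go func() { expired <- r }() // "expired event with positive ref count, retrying"
		return
	}
	unloaded <- r // "sending an unloaded event"
}

func main() {
	expired := make(chan *runner, 4)
	unloaded := make(chan *runner, 4)

	r := &runner{refCount: 1, sessionDuration: 2 * time.Millisecond}
	r.requestFinished(expired) // last request done: arms the 2 ms timer

	processExpired(<-expired, expired, unloaded)
	<-unloaded
	fmt.Println("runner unloaded after idle timeout")
}
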
time=2025-10-04T05:43:39.066Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:39.066Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:39.066Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:39.067Z level=INFO source=sched_test.go:669 msg=scenario1a time=2025-10-04T05:43:39.067Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:39.067Z level=DEBUG source=sched.go:142 msg="pending request cancelled or timed out, skipping scheduling" time=2025-10-04T05:43:39.072Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:39.072Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" PASS ok github.com/ollama/ollama/server 0.926s github.com/ollama/ollama/server time=2025-10-04T05:43:40.145Z level=INFO source=logging.go:32 msg="ollama app started" time=2025-10-04T05:43:40.146Z level=DEBUG source=convert.go:232 msg="vocabulary is smaller than expected, padding with dummy tokens" expect=32000 actual=1 time=2025-10-04T05:43:40.152Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=general.file_type type=uint32 time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=general.quantization_version type=uint32 time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=llama.vocab_size type=uint32 time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.pre type=string time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.152Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.178Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.178Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:43:40.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=llama.vision.block_count type=uint32 time=2025-10-04T05:43:40.180Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:43:40.180Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.181Z level=ERROR source=images.go:157 msg="unknown capability" capability=unknown time=2025-10-04T05:43:40.183Z level=WARN source=manifest.go:160 msg="bad manifest name" path=host/namespace/model/.hidden time=2025-10-04T05:43:40.183Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:43:40.183Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:43:40.183Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=2 time=2025-10-04T05:43:40.183Z level=DEBUG source=prompt.go:63 msg="truncating input messages which exceed context length" truncated=4 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:40.184Z level=WARN source=quantization.go:145 msg="tensor cols 100 are not divisible by 32, required for Q8_0 - using fallback quantization F16" time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not 
found" key=general.architecture default=unknown time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[512 2]" offset=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=output.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=12 shape="[512 2]" offset=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:627 msg=output.weight kind=14 shape="[256 4]" offset=576 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape="[512 2]" offset=0 time=2025-10-04T05:43:40.184Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=4096 time=2025-10-04T05:43:40.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 
time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:40.185Z level=DEBUG source=quantization.go:241 msg="tensor quantization adjusted for better quality" name=blk.0.attn_v.weight requested=Q4_K quantization=Q6_K time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=14 shape="[512 2]" offset=0 time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[512] offset=864 time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=1 shape="[32 16 2]" offset=0 time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:627 msg=output.weight kind=1 shape="[256 4]" offset=2048 time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=foo.expert_count default=0 time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:578 msg=general.file_type type=ggml.FileType time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:578 msg=general.parameter_count type=uint64 time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=8 shape="[32 16 2]" offset=0 time=2025-10-04T05:43:40.185Z level=DEBUG source=gguf.go:627 msg=output.weight kind=8 shape="[256 4]" offset=1088 time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.185Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.186Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.187Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:40.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.187Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.architecture default=unknown time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.188Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.189Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.189Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.190Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.191Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.191Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.191Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.192Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:40.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.192Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:43:40.194Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.194Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown removing old messages time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
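Most of the DEBUG traffic in this run is ggml.go:276 reporting "key with type not found": a GGUF metadata key (general.alignment, general.architecture, tokenizer.chat_template, ...) is read through a typed accessor, is missing, and a documented default is returned instead. The following is a minimal sketch of that lookup-with-default pattern over a generic key/value map; the keyValue helper and its signature are illustrative assumptions, not ollama's API.

package main

import (
	"fmt"
	"log/slog"
)

// keyValue returns kv[key] when it is present with type T; otherwise it
// logs a message like the ggml.go:276 entries above and returns def.
func keyValue[T any](kv map[string]any, key string, def T) T {
	if v, ok := kv[key]; ok {
		if t, ok := v.(T); ok {
			return t
		}
	}
	slog.Debug("key with type not found", "key", key, "default", def)
	return def
}

func main() {
	slog.SetLogLoggerLevel(slog.LevelDebug) // make the Debug entries visible

	kv := map[string]any{
		"general.architecture": "llama",
		"llama.block_count":    uint32(32),
	}
	// Present with the expected type: returned as-is.
	fmt.Println(keyValue(kv, "general.architecture", "unknown"))
	// Missing key: the default (32) is returned and the debug line is emitted.
	fmt.Println(keyValue[uint32](kv, "general.alignment", 32))
}
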
time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.195Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.type default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.196Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.197Z level=DEBUG source=create.go:98 msg="create model from model name" from=bob resp = api.ShowResponse{License:"", Modelfile:"# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM test:latest\n\nFROM \nTEMPLATE {{ .Prompt }}\n", Parameters:"", Template:"{{ .Prompt }}", System:"", Renderer:"", Parser:"", Details:api.ModelDetails{ParentModel:"", Format:"", Family:"gptoss", Families:[]string{"gptoss"}, ParameterSize:"20.9B", QuantizationLevel:"MXFP4"}, Messages:[]api.Message(nil), RemoteModel:"bob", RemoteHost:"https://ollama.com:11434", ModelInfo:map[string]interface {}{"general.architecture":"gptoss", "gptoss.context_length":131072, "gptoss.embedding_length":2880}, ProjectorInfo:map[string]interface {}(nil), Tensors:[]api.Tensor(nil), Capabilities:[]model.Capability{"completion", "tools", "thinking"}, ModifiedAt:time.Date(2025, time.October, 4, 5, 43, 40, 197551492, time.UTC)} time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture 
default=unknown time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.198Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.199Z level=DEBUG source=gguf.go:578 msg=tokenizer.chat_template type=string time=2025-10-04T05:43:40.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.199Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.206Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type 
default=unknown time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.207Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.208Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.208Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.208Z 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.208Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.210Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-10-04T05:43:40.210Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA" time=2025-10-04T05:43:40.210Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so* time=2025-10-04T05:43:40.210Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build1918933372/b001/libcuda.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]" time=2025-10-04T05:43:40.210Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:43:40.210Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so* time=2025-10-04T05:43:40.210Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/tmp/go-build1918933372/b001/libcudart.so* /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/_build/src/github.com/ollama/ollama/server/libcudart.so* /tmp/go-build1918933372/b001/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]" time=2025-10-04T05:43:40.211Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[] time=2025-10-04T05:43:40.211Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu" time=2025-10-04T05:43:40.211Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered" time=2025-10-04T05:43:40.211Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:43:40.211Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.211Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.212Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.212Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.212Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.214Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 
GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.214Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.214Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.215Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.215Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.215Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.217Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.217Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.217Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.219Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.219Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.219Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.221Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.221Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.221Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.223Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.223Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.223Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.224Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" 
now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.224Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.224Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.226Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.226Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.226Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.228Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.228Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.228Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.230Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.230Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.230Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateDebugRenderOnly248751400/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.231Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.231Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.231Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.231Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 
time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.231Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.232Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.232Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.232Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.232Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.232Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.232Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.232Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.233Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.234Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.234Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.234Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.236Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.237Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.237Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.238Z level=DEBUG source=gpu.go:410 msg="updating system 
memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.238Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.238Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.240Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.240Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.240Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.241Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.241Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.241Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.243Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.243Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.243Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.245Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.245Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.245Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.247Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.247Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.247Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.248Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 
GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.249Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.249Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatDebugRenderOnly3509973244/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.250Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.250Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.250Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown 
time=2025-10-04T05:43:40.252Z level=DEBUG source=manifest.go:53 msg="layer does not exist" digest=sha256:776957f9c9239232f060e29d642d8f5ef3bb931f485c27a13ae6385515fb425c time=2025-10-04T05:43:40.252Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.252Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.252Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.252Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.252Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.252Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.252Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.252Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.254Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.254Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:43:40.254Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.254Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.254Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.254Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:43:40.254Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:43:40.254Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.254Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.254Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.255Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.255Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.255Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1861696781/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.257Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.257Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.257Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1861696781/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.258Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:40.258Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.258Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.259Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.259Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.259Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1861696781/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.261Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.261Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.261Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1861696781/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.263Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" 
before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.263Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.263Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1861696781/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.265Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.265Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.265Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1861696781/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.266Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.266Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.266Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerateChat1861696781/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.298Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.298Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.299Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.299Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.299Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 
shape=[1] offset=160 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.300Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.300Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.300Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.300Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.300Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.300Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:578 msg=bert.pooling_type type=uint32 time=2025-10-04T05:43:40.300Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.301Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.301Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.301Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.block_count default=0 time=2025-10-04T05:43:40.301Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=bert.vision.block_count default=0 time=2025-10-04T05:43:40.301Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.301Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.301Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.302Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.302Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.302Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1011573002/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.304Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.304Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.304Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1011573002/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.305Z 
level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:40.305Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.305Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.307Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.307Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.307Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1011573002/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.308Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.308Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.308Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1011573002/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.310Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.310Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.310Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1011573002/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.312Z level=DEBUG source=create.go:98 msg="create model from model name" from=test time=2025-10-04T05:43:40.312Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.312Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.313Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.313Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.313Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1011573002/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.315Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.315Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.315Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1011573002/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.316Z level=DEBUG source=gpu.go:410 
msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.316Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.316Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGenerate1011573002/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.318Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.318Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.318Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.318Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.318Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.319Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.319Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.319Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.319Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.319Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2236502089/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.321Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.321Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.321Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2236502089/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.323Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.323Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.323Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2236502089/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.325Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.325Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.325Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2236502089/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.327Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.327Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.327Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatWithPromptEndingInThinkTag2236502089/001/blobs/sha256-f7d501e201449010dc170a8176a78095031ec1826728428777709515190c4d32 time=2025-10-04T05:43:40.369Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.369Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.369Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.369Z level=DEBUG 
source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.369Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.369Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.370Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.370Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.370Z level=DEBUG source=sched.go:208 msg="loading first model" 
model=/tmp/TestChatHarmonyParserStreamingRealtimecontent_streams_as_it_arrives3547020901/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.462Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.462Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.462Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.462Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.462Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.463Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.463Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.463Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.463Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.463Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.463Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.463Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.463Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.463Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.464Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimethinking_streams_separately_from_content3561544135/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.585Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.585Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.585Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.585Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown 
time=2025-10-04T05:43:40.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.585Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.586Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.586Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.586Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimepartial_tags_buffer_until_complete4004438849/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.738Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.738Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.738Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.738Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.738Z level=DEBUG 
source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.738Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.739Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.739Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.739Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.739Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.739Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.739Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.739Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.740Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.740Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.740Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimesimple_assistant_after_analysis1782791986/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.771Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.771Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.771Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.771Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight 
kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.771Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.772Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.772Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.772Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_parsed_and_returned_correctly1887471412/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.803Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.803Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.803Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.803Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type 
type=[]int32 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.803Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.804Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.806Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.806Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.806Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingRealtimetool_call_with_streaming_JSON_across_chunks3191881151/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.896Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.896Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.897Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.897Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.897Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.897Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.898Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.898Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.898Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingSimple2731363099/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.898Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.898Z level=DEBUG source=sched.go:265 msg="shutting 
down scheduler completed loop" time=2025-10-04T05:43:40.898Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.898Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.898Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.898Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.898Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.898Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.899Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.899Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.900Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.900Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.900Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingsimple_message_without_thinking1508456388/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.900Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.900Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.900Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.900Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.900Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.901Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.901Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.901Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.901Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.901Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.901Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown 
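
The repeated `ggml.go:276 msg="key with type not found" key=... default=...` DEBUG entries above appear to come from typed GGUF metadata lookups that fall back to a caller-supplied default when a key is missing or has an unexpected type. A minimal Go sketch of that pattern follows; the names KV and keyValue are assumptions for illustration and are not claimed to match ollama's actual API.

package main

import (
	"fmt"
	"log/slog"
)

// KV models a GGUF key/value table with untyped values.
type KV map[string]any

// keyValue returns kv[key] as T, or logs the miss and returns def when the
// key is absent or has a different type. Behavior inferred from the log only.
func keyValue[T any](kv KV, key string, def T) T {
	if v, ok := kv[key].(T); ok {
		return v
	}
	slog.Debug("key with type not found", "key", key, "default", def)
	return def
}

func main() {
	kv := KV{"general.architecture": "llama", "llama.block_count": uint32(1)}
	fmt.Println(keyValue(kv, "general.alignment", uint32(32))) // missing key, falls back to 32
	fmt.Println(keyValue(kv, "general.architecture", "unknown"))
}
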
time=2025-10-04T05:43:40.901Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.901Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.902Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.903Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.903Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingmessage_with_analysis_channel_for_thinking2991253478/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.903Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.903Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.903Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_k.weight kind=0 shape=[1] offset=0 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_norm.weight kind=0 shape=[1] offset=32 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_output.weight kind=0 shape=[1] offset=64 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_q.weight kind=0 shape=[1] offset=96 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.attn_v.weight kind=0 shape=[1] offset=128 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_down.weight kind=0 shape=[1] offset=160 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_gate.weight kind=0 shape=[1] offset=192 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_norm.weight kind=0 shape=[1] offset=224 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=blk.0.ffn_up.weight kind=0 shape=[1] offset=256 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape=[1] offset=288 time=2025-10-04T05:43:40.903Z level=DEBUG source=gguf.go:627 msg=token_embd.weight kind=0 shape=[1] offset=320 time=2025-10-04T05:43:40.903Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.904Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:43:40.904Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.904Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.block_count default=0 time=2025-10-04T05:43:40.904Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.vision.block_count default=0 time=2025-10-04T05:43:40.904Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.904Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.904Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.904Z level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="7.6 GiB" before.free="1.4 GiB" before.free_swap="139.4 GiB" now.total="7.6 GiB" now.free="1.4 GiB" now.free_swap="139.4 GiB" time=2025-10-04T05:43:40.904Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.904Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestChatHarmonyParserStreamingstreaming_with_partial_tags_across_boundaries4037071252/001/blobs/sha256-e045afb47b5c1be387efdd93037204b4f730013ab4b2ad5766222a042d7790f5 time=2025-10-04T05:43:40.905Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.905Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.905Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown 
time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.906Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.907Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.907Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.908Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:43:40 | 200 | 29.693µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/04 - 05:43:40 | 200 | 57.453µs | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/04 - 05:43:40 | 200 | 94.858µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.911Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:43:40 | 200 | 158.238µs | 127.0.0.1 | GET "/api/tags" time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.912Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.912Z level=INFO source=images.go:518 msg="total blobs: 3" time=2025-10-04T05:43:40.913Z level=INFO source=images.go:525 msg="total unused blobs removed: 0" time=2025-10-04T05:43:40.913Z level=INFO source=server.go:164 msg=http status=200 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:53738 proto=HTTP/1.1 query="" time=2025-10-04T05:43:40.913Z level=WARN source=server.go:164 msg=http error="model not found" status=404 method=DELETE path=/api/delete content-length=-1 remote=127.0.0.1:53738 proto=HTTP/1.1 query="" [GIN] 2025/10/04 - 05:43:40 | 200 | 137.81µs | 127.0.0.1 | GET "/v1/models" time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:43:40 | 200 | 360.583µs | 127.0.0.1 | POST "/api/create" time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.914Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown [GIN] 2025/10/04 - 05:43:40 | 200 | 281.857µs | 127.0.0.1 | POST "/api/copy" time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.915Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:43:40 | 200 | 498.712µs | 127.0.0.1 | POST "/api/show" time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.916Z level=DEBUG 
source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.916Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.917Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 [GIN] 2025/10/04 - 05:43:40 | 200 | 488.131µs | 127.0.0.1 | GET "/v1/models/show-model" [GIN] 2025/10/04 - 05:43:40 | 405 | 663ns | 127.0.0.1 | GET "/api/show" time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.918Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.block_count default=0 time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=unknown.vision.block_count default=0 time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.type default=unknown time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.919Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.architecture default=unknown time=2025-10-04T05:43:40.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.920Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.920Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.920Z level=DEBUG source=gguf.go:578 msg=general.type type=string time=2025-10-04T05:43:40.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.block_count default=0 time=2025-10-04T05:43:40.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=test.vision.block_count default=0 time=2025-10-04T05:43:40.920Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.type default=unknown time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.block_count default=0 time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=clip.vision.block_count default=0 time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.chat_template default="" time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.921Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:43:40.921Z level=INFO source=sched.go:417 msg="NewLlamaServer failed" model=foo error="something failed to load model blah: this model may be incompatible with your version of Ollama. 
If you previously pulled this model, try updating it by running `ollama pull `" time=2025-10-04T05:43:40.921Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:43:40.921Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:40.921Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:40.921Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open dummy_model_path: no such file or directory" time=2025-10-04T05:43:40.921Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:43:40.922Z level=ERROR source=sched.go:476 msg="error loading llama server" error="wait failure" time=2025-10-04T05:43:40.922Z level=DEBUG source=sched.go:478 msg="triggering expiration for failed load" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=dummy_model_path runner.num_ctx=4096 time=2025-10-04T05:43:40.923Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 
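
The "NewLlamaServer failed" / "error loading llama server" / "triggering expiration for failed load" sequence above is consistent with a scheduler that, when a runner fails to start, expires it immediately so the normal unload path can reclaim its slot. A small hedged sketch of that idea, with hypothetical types (runner, load) that are not ollama's scheduler code:

package main

import (
	"errors"
	"fmt"
)

type runner struct {
	model   string
	expired chan struct{}
}

// load tries to start a model server; on failure it expires the runner right
// away so a separate cleanup loop can unload it.
func load(r *runner, start func() error) error {
	if err := start(); err != nil {
		close(r.expired) // "triggering expiration for failed load"
		return fmt.Errorf("error loading llama server: %w", err)
	}
	return nil
}

func main() {
	r := &runner{model: "dummy_model_path", expired: make(chan struct{})}
	err := load(r, func() error { return errors.New("wait failure") })
	fmt.Println(err)
	select {
	case <-r.expired:
		fmt.Println("runner expired, ready to unload")
	default:
	}
}
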
time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.923Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.924Z level=INFO source=sched_test.go:179 msg=a time=2025-10-04T05:43:40.924Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.924Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSameModelSameRequest2423151776/002/3404763895 time=2025-10-04T05:43:40.924Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:40.924Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest2423151776/002/3404763895 runner.num_ctx=4096 time=2025-10-04T05:43:40.924Z level=INFO source=sched_test.go:196 msg=b time=2025-10-04T05:43:40.924Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSameModelSameRequest2423151776/002/3404763895 time=2025-10-04T05:43:40.924Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.924Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.924Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSameModelSameRequest2423151776/002/3404763895 runner.num_ctx=4096 time=2025-10-04T05:43:40.924Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.924Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.924Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=general.alignment default=32 time=2025-10-04T05:43:40.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.925Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.925Z level=INFO source=sched_test.go:223 msg=a time=2025-10-04T05:43:40.925Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.925Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.925Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 time=2025-10-04T05:43:40.925Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:40.925Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 time=2025-10-04T05:43:40.925Z level=INFO source=sched_test.go:241 msg=b time=2025-10-04T05:43:40.925Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 time=2025-10-04T05:43:40.925Z level=DEBUG source=sched.go:154 msg=reloading runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 time=2025-10-04T05:43:40.925Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:43:40.925Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 
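
The TestRequestsSimpleReloadSameModel trace above ("reloading", "resetting model to expire immediately to make room", "waiting for pending requests to complete and unload to occur") suggests a reload that expires the current runner, waits for its reference count to drain, and only then loads the model again. A hedged sketch under those assumed semantics; loadedRunner and release are illustrative names, not ollama's code:

package main

import "fmt"

type loadedRunner struct {
	model    string
	refCount int
	unloaded chan struct{}
}

// release drops one reference; once the runner is expired and idle it unloads.
func (r *loadedRunner) release(expired bool) {
	r.refCount--
	if expired && r.refCount == 0 {
		close(r.unloaded) // corresponds to "unload completed" in the log
	}
}

func main() {
	r := &loadedRunner{model: "ollama-model-1", refCount: 1, unloaded: make(chan struct{})}

	// A reload request with new options arrives: expire the runner immediately
	// and wait for in-flight requests to finish before loading it again.
	fmt.Println("resetting model to expire immediately to make room")
	r.release(true) // last pending request completes
	<-r.unloaded
	fmt.Println("unload completed; loading", r.model, "again with new options")
}
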
time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 time=2025-10-04T05:43:40.926Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 time=2025-10-04T05:43:40.926Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1 runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="20 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsSimpleReloadSameModel797865770/002/846064717 runner.num_ctx=4096 time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.926Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG 
source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.927Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=INFO source=sched_test.go:274 msg=a time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 time=2025-10-04T05:43:40.927Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 time=2025-10-04T05:43:40.927Z level=INFO source=sched_test.go:293 msg=b time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:43:40.927Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:43:40.927Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 time=2025-10-04T05:43:40.927Z level=INFO source=sched_test.go:311 msg=c time=2025-10-04T05:43:40.927Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=cpu available="24.2 GiB" time=2025-10-04T05:43:40.927Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=cpu total="29.8 GiB" available="19.6 GiB" time=2025-10-04T05:43:40.927Z level=INFO source=sched.go:470 msg="loaded runners" count=3 time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-4a runner.inference=cpu runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/006/2870742563 runner.num_ctx=4096 time=2025-10-04T05:43:40.927Z level=INFO source=sched_test.go:329 msg=d time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:43:40.927Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:162 msg="max runners achieved, unloading one to make room" runner_count=3 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 runner.num_ctx=4096 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="953.7 MiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/002/3663354988 time=2025-10-04T05:43:40.930Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:43:40.930Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="3.7 GiB" time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:232 msg="resetting model to expire immediately to make room" runner.name=ollama-model-3b runner.inference=metal 
runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:43:40.930Z level=DEBUG source=sched.go:243 msg="waiting for pending requests to complete and unload to occur" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-3b runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 runner.num_ctx=4096 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:249 msg="unload completed" runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/004/149220103 time=2025-10-04T05:43:40.936Z level=DEBUG source=ggml.go:276 msg="key 
with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu="" library=metal available="11.2 GiB" time=2025-10-04T05:43:40.936Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu="" library=metal total="22.4 GiB" available="11.2 GiB" time=2025-10-04T05:43:40.936Z level=INFO source=sched.go:470 msg="loaded runners" count=2 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:218 msg="new model fits with existing models, loading" time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-3c runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="9.3 GiB" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestRequestsMultipleLoadedModels2284676142/008/225249048 runner.num_ctx=4096 time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.936Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.936Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.936Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.936Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.936Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.936Z level=DEBUG 
source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.937Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:40.937Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:40.938Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:40.938Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.938Z level=INFO source=sched_test.go:367 msg=a time=2025-10-04T05:43:40.938Z level=INFO source=sched_test.go:370 msg=b time=2025-10-04T05:43:40.938Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:40.938Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:40.938Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestGetRunner376410361/002/3926288305 time=2025-10-04T05:43:40.938Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:40.938Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 runner.num_ctx=4096 time=2025-10-04T05:43:40.938Z level=INFO source=sched_test.go:394 msg=c time=2025-10-04T05:43:40.938Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open bad path: no such file or directory" time=2025-10-04T05:43:40.938Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:40.938Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 runner.num_ctx=4096 duration=2ms time=2025-10-04T05:43:40.938Z level=DEBUG source=sched.go:304 msg="after processing request 
finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:40.940Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 runner.num_ctx=4096 time=2025-10-04T05:43:40.940Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 runner.num_ctx=4096 time=2025-10-04T05:43:40.940Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 runner.num_ctx=4096 time=2025-10-04T05:43:40.940Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 runner.num_ctx=4096 time=2025-10-04T05:43:40.940Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 runner.num_ctx=4096 time=2025-10-04T05:43:40.940Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 time=2025-10-04T05:43:40.940Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestGetRunner376410361/002/3926288305 time=2025-10-04T05:43:40.940Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:40.989Z level=ERROR source=images.go:92 msg="couldn't open model file" error="open foo: no such file or directory" time=2025-10-04T05:43:40.989Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:279 msg="runner with zero duration has gone idle, expiring to unload" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 refCount=0 
time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name="" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo runner.num_ctx=4096 time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:43:40.989Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=foo time=2025-10-04T05:43:41.009Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:41.009Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:41.009Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:41.009Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:41.009Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:41.009Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:41.009Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:41.009Z level=DEBUG source=sched.go:208 msg="loading first model" model=/tmp/TestPrematureExpired1020813175/002/1665592415 time=2025-10-04T05:43:41.009Z level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-10-04T05:43:41.009Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 
runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 time=2025-10-04T05:43:41.009Z level=INFO source=sched_test.go:481 msg="sending premature expired event now" time=2025-10-04T05:43:41.009Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 time=2025-10-04T05:43:41.009Z level=DEBUG source=sched.go:310 msg="expired event with positive ref count, retrying" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 refCount=1 time=2025-10-04T05:43:41.014Z level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-10-04T05:43:41.014Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 duration=5ms time=2025-10-04T05:43:41.014Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 refCount=0 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:288 msg="timer expired, expiring to unload" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:345 msg="starting background wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:630 msg="no need to wait for VRAM recovery" runner.name=ollama-model-1a runner.inference=metal runner.devices=1 runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 runner.num_ctx=4096 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:350 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 
runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:353 msg="sending an unloaded event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:255 msg="ignoring unload event with no pending requests" time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:307 msg="runner expired event received" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:322 msg="got lock to unload expired event" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 time=2025-10-04T05:43:41.020Z level=DEBUG source=sched.go:332 msg="duplicate expired event, ignoring" runner.size="0 B" runner.vram="10 B" runner.parallel=1 runner.pid=-1 runner.model=/tmp/TestPrematureExpired1020813175/002/1665592415 time=2025-10-04T05:43:41.046Z level=ERROR source=sched.go:272 msg="finished request signal received after model unloaded" modelPath=/tmp/TestPrematureExpired1020813175/002/1665592415 time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:265 msg="shutting down scheduler completed loop" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:135 msg="shutting down scheduler pending loop" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=1 library=a available="900 B" time=2025-10-04T05:43:41.051Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=1 library=a total="1000 B" available="825 B" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:526 msg="gpu reported" gpu=2 library=a available="1.9 KiB" time=2025-10-04T05:43:41.051Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=2 library=a total="2.0 KiB" available="1.8 KiB" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:742 msg="found an idle runner to unload" runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:747 msg="no idle runners, picking the shortest duration" runner_count=2 runner.size="0 B" runner.vram="0 B" runner.parallel=1 runner.pid=0 runner.model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model="" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=a time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:763 msg="shutting down runner" model=b 
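(The DEBUG/INFO records above and below come from the Go unit tests of the server package, which exercise the llm scheduler: loading runners, expiring idle models, and waiting for VRAM recovery. A minimal sketch for re-running just those tests from the unpacked ollama-0.12.3 source tree, assuming only a local Go toolchain; the test names are inferred from the /tmp/Test* directories in the records above:

    # hypothetical local re-run of the scheduler tests seen in this log
    cd ollama-0.12.3
    go test -v -run 'TestRequestsSimpleReloadSameModel|TestRequestsMultipleLoadedModels|TestGetRunner|TestPrematureExpired' ./server/

The -run argument is a regular expression matched against test names, so it limits the run to the scheduler scenarios logged here.)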
time=2025-10-04T05:43:41.051Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=general.architecture type=string time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count type=uint32 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=llama.attention.head_count_kv type=uint32 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=llama.block_count type=uint32 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=llama.context_length type=uint32 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=llama.embedding_length type=uint32 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.scores type=[]float32 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.token_type type=[]int32 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.tokens type=[]string time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:627 msg=blk.0.attn.weight kind=0 shape="[1 1 1 1]" offset=0 time=2025-10-04T05:43:41.051Z level=DEBUG source=gguf.go:627 msg=output.weight kind=0 shape="[1 1 1 1]" offset=32 time=2025-10-04T05:43:41.051Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-10-04T05:43:41.051Z level=INFO source=sched_test.go:669 msg=scenario1a time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:121 msg="starting llm scheduler" time=2025-10-04T05:43:41.051Z level=DEBUG source=sched.go:142 msg="pending request cancelled or timed out, skipping scheduling" PASS ok github.com/ollama/ollama/server 0.924s github.com/ollama/ollama/server/internal/cache/blob PASS ok github.com/ollama/ollama/server/internal/cache/blob 0.004s github.com/ollama/ollama/server/internal/cache/blob PASS ok github.com/ollama/ollama/server/internal/cache/blob 0.004s github.com/ollama/ollama/server/internal/client/ollama 2025/10/04 05:43:42 http: TLS handshake error from 127.0.0.1:43858: remote error: tls: bad certificate PASS ok github.com/ollama/ollama/server/internal/client/ollama 0.148s github.com/ollama/ollama/server/internal/client/ollama 2025/10/04 05:43:42 http: TLS handshake error from 127.0.0.1:40872: remote error: tls: bad certificate PASS ok github.com/ollama/ollama/server/internal/client/ollama 0.149s github.com/ollama/ollama/server/internal/internal/backoff ? github.com/ollama/ollama/server/internal/internal/backoff [no test files] github.com/ollama/ollama/server/internal/internal/names PASS ok github.com/ollama/ollama/server/internal/internal/names 0.002s github.com/ollama/ollama/server/internal/internal/names PASS ok github.com/ollama/ollama/server/internal/internal/names 0.002s github.com/ollama/ollama/server/internal/internal/stringsx PASS ok github.com/ollama/ollama/server/internal/internal/stringsx 0.004s github.com/ollama/ollama/server/internal/internal/stringsx PASS ok github.com/ollama/ollama/server/internal/internal/stringsx 0.003s github.com/ollama/ollama/server/internal/internal/syncs ? github.com/ollama/ollama/server/internal/internal/syncs [no test files] github.com/ollama/ollama/server/internal/manifest ? 
github.com/ollama/ollama/server/internal/manifest [no test files] github.com/ollama/ollama/server/internal/registry 2025/10/04 05:43:43 http: TLS handshake error from 127.0.0.1:33234: write tcp 127.0.0.1:42047->127.0.0.1:33234: use of closed network connection PASS ok github.com/ollama/ollama/server/internal/registry 0.008s github.com/ollama/ollama/server/internal/registry 2025/10/04 05:43:43 http: TLS handshake error from 127.0.0.1:42044: write tcp 127.0.0.1:42373->127.0.0.1:42044: use of closed network connection PASS ok github.com/ollama/ollama/server/internal/registry 0.008s github.com/ollama/ollama/server/internal/testutil ? github.com/ollama/ollama/server/internal/testutil [no test files] github.com/ollama/ollama/template PASS ok github.com/ollama/ollama/template 0.544s github.com/ollama/ollama/template PASS ok github.com/ollama/ollama/template 0.532s github.com/ollama/ollama/thinking PASS ok github.com/ollama/ollama/thinking 0.002s github.com/ollama/ollama/thinking PASS ok github.com/ollama/ollama/thinking 0.002s github.com/ollama/ollama/tools PASS ok github.com/ollama/ollama/tools 0.005s github.com/ollama/ollama/tools PASS ok github.com/ollama/ollama/tools 0.005s github.com/ollama/ollama/types/errtypes ? github.com/ollama/ollama/types/errtypes [no test files] github.com/ollama/ollama/types/model PASS ok github.com/ollama/ollama/types/model 0.003s github.com/ollama/ollama/types/model PASS ok github.com/ollama/ollama/types/model 0.003s github.com/ollama/ollama/types/syncmap ? github.com/ollama/ollama/types/syncmap [no test files] github.com/ollama/ollama/version ? github.com/ollama/ollama/version [no test files] + RPM_EC=0 ++ jobs -p + exit 0 Processing files: ollama-0.12.3-1.fc42.x86_64 Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.dxcglu + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + DOCDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export DOCDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/docs /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/CONTRIBUTING.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/README.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/SECURITY.md /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/doc/ollama + RPM_EC=0 ++ jobs -p + exit 0 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.p5DGSb + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/vendor/modules.txt /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama + RPM_EC=0 ++ jobs -p + exit 0 warning: File listed twice: /usr/share/licenses/ollama Provides: bundled(golang(github.com/agnivade/levenshtein)) = 1.1.1 bundled(golang(github.com/apache/arrow/go/arrow)) = bc21918 bundled(golang(github.com/bytedance/sonic)) = 1.11.6 
bundled(golang(github.com/bytedance/sonic/loader)) = 0.1.1 bundled(golang(github.com/chewxy/hm)) = 1.0.0 bundled(golang(github.com/chewxy/math32)) = 1.11.0 bundled(golang(github.com/cloudwego/base64x)) = 0.1.4 bundled(golang(github.com/cloudwego/iasm)) = 0.2.0 bundled(golang(github.com/containerd/console)) = 1.0.3 bundled(golang(github.com/d4l3k/go-bfloat16)) = 690c3bd bundled(golang(github.com/davecgh/go-spew)) = 1.1.1 bundled(golang(github.com/dlclark/regexp2)) = 1.11.4 bundled(golang(github.com/emirpasic/gods/v2)) = 2.0.0_alpha bundled(golang(github.com/gabriel-vasile/mimetype)) = 1.4.3 bundled(golang(github.com/gin-contrib/cors)) = 1.7.2 bundled(golang(github.com/gin-contrib/sse)) = 0.1.0 bundled(golang(github.com/gin-gonic/gin)) = 1.10.0 bundled(golang(github.com/go-playground/locales)) = 0.14.1 bundled(golang(github.com/go-playground/universal-translator)) = 0.18.1 bundled(golang(github.com/go-playground/validator/v10)) = 10.20.0 bundled(golang(github.com/goccy/go-json)) = 0.10.2 bundled(golang(github.com/gogo/protobuf)) = 1.3.2 bundled(golang(github.com/golang/protobuf)) = 1.5.4 bundled(golang(github.com/google/flatbuffers)) = 24.3.25+incompatible bundled(golang(github.com/google/go-cmp)) = 0.7.0 bundled(golang(github.com/google/uuid)) = 1.6.0 bundled(golang(github.com/inconshreveable/mousetrap)) = 1.1.0 bundled(golang(github.com/json-iterator/go)) = 1.1.12 bundled(golang(github.com/klauspost/cpuid/v2)) = 2.2.7 bundled(golang(github.com/kr/text)) = 0.2.0 bundled(golang(github.com/leodido/go-urn)) = 1.4.0 bundled(golang(github.com/mattn/go-isatty)) = 0.0.20 bundled(golang(github.com/mattn/go-runewidth)) = 0.0.14 bundled(golang(github.com/modern-go/concurrent)) = bacd9c7 bundled(golang(github.com/modern-go/reflect2)) = 1.0.2 bundled(golang(github.com/nlpodyssey/gopickle)) = 0.3.0 bundled(golang(github.com/olekukonko/tablewriter)) = 0.0.5 bundled(golang(github.com/pdevine/tensor)) = f88f456 bundled(golang(github.com/pelletier/go-toml/v2)) = 2.2.2 bundled(golang(github.com/pkg/errors)) = 0.9.1 bundled(golang(github.com/pmezard/go-difflib)) = 1.0.0 bundled(golang(github.com/rivo/uniseg)) = 0.2.0 bundled(golang(github.com/spf13/cobra)) = 1.7.0 bundled(golang(github.com/spf13/pflag)) = 1.0.5 bundled(golang(github.com/stretchr/testify)) = 1.9.0 bundled(golang(github.com/twitchyliquid64/golang-asm)) = 0.15.1 bundled(golang(github.com/ugorji/go/codec)) = 1.2.12 bundled(golang(github.com/x448/float16)) = 0.8.4 bundled(golang(github.com/xtgo/set)) = 1.0.0 bundled(golang(go4.org/unsafe/assume-no-moving-gc)) = b99613f bundled(golang(golang.org/x/arch)) = 0.8.0 bundled(golang(golang.org/x/crypto)) = 0.36.0 bundled(golang(golang.org/x/exp)) = aa4b98e bundled(golang(golang.org/x/image)) = 0.22.0 bundled(golang(golang.org/x/net)) = 0.38.0 bundled(golang(golang.org/x/sync)) = 0.12.0 bundled(golang(golang.org/x/sys)) = 0.31.0 bundled(golang(golang.org/x/term)) = 0.30.0 bundled(golang(golang.org/x/text)) = 0.23.0 bundled(golang(golang.org/x/tools)) = 0.30.0 bundled(golang(golang.org/x/xerrors)) = 5ec99f8 bundled(golang(gonum.org/v1/gonum)) = 0.15.0 bundled(golang(google.golang.org/protobuf)) = 1.34.1 bundled(golang(gopkg.in/yaml.v3)) = 3.0.1 bundled(golang(gorgonia.org/vecf32)) = 0.9.0 bundled(golang(gorgonia.org/vecf64)) = 0.9.0 bundled(llama-cpp) = b6121 config(ollama) = 0.12.3-1.fc42 group(ollama) group(ollama) = ZyBvbGxhbWEgLSAt ollama = 0.12.3-1.fc42 ollama(x86-64) = 0.12.3-1.fc42 user(ollama) = dSBvbGxhbWEgLSAiT2xsYW1hIiAvdmFyL2xpYi9vbGxhbWEgLQAA Requires(interp): /bin/sh /bin/sh /bin/sh /bin/sh 
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(pre): /bin/sh shadow-utils Requires(post): /bin/sh Requires(preun): /bin/sh Requires(postun): /bin/sh Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libresolv.so.2()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.11)(64bit) libstdc++.so.6(CXXABI_1.3.13)(64bit) libstdc++.so.6(CXXABI_1.3.15)(64bit) libstdc++.so.6(CXXABI_1.3.2)(64bit) libstdc++.so.6(CXXABI_1.3.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.17)(64bit) libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.19)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.22)(64bit) libstdc++.so.6(GLIBCXX_3.4.25)(64bit) libstdc++.so.6(GLIBCXX_3.4.26)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) libstdc++.so.6(GLIBCXX_3.4.32)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) rtld(GNU_HASH) Recommends: group(ollama) ollama-ggml user(ollama) Processing files: ollama-ggml-0.12.3-1.fc42.x86_64 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.LfN7qN + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml + RPM_EC=0 ++ jobs -p + exit 0 Provides: bundled(llama-cpp) = b6121 ollama-ggml = 0.12.3-1.fc42 ollama-ggml(x86-64) = 0.12.3-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) rtld(GNU_HASH) Processing files: ollama-ggml-cpu-0.12.3-1.fc42.x86_64 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.NmdpYc + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd 
ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-cpu + RPM_EC=0 ++ jobs -p + exit 0 Provides: bundled(llama-cpp) = b6121 ollama-ggml-cpu = 0.12.3-1.fc42 ollama-ggml-cpu(x86-64) = 0.12.3-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) rtld(GNU_HASH) Supplements: ollama-ggml(x86-64) Processing files: ollama-ggml-rocm-0.12.3-1.fc42.x86_64 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.ryolYm + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + cd ollama-0.12.3 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm + cp -pr /builddir/build/BUILD/ollama-0.12.3-build/ollama-0.12.3/ml/backend/ggml/ggml/LICENSE /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT/usr/share/licenses/ollama-ggml-rocm + RPM_EC=0 ++ jobs -p + exit 0 Provides: bundled(llama-cpp) = b6121 ollama-ggml-rocm = 0.12.3-1.fc42 ollama-ggml-rocm(x86-64) = 0.12.3-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libamdhip64.so.6()(64bit) libamdhip64.so.6(hip_4.2)(64bit) libamdhip64.so.6(hip_6.0)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libhipblas.so.2()(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) librocblas.so.4()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) Supplements: if ollama-ggml(x86-64) rocm-hip(x86-64) Processing files: ollama-debugsource-0.12.3-1.fc42.x86_64 Provides: ollama-debugsource = 0.12.3-1.fc42 ollama-debugsource(x86-64) = 0.12.3-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Processing files: ollama-debuginfo-0.12.3-1.fc42.x86_64 Provides: debuginfo(build-id) = 1b56adca4c8797ba94bac8d8ca2d21bb993e59df ollama-debuginfo = 0.12.3-1.fc42 ollama-debuginfo(x86-64) = 0.12.3-1.fc42 Requires(rpmlib): 
rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc42 Processing files: ollama-ggml-debuginfo-0.12.3-1.fc42.x86_64 Provides: debuginfo(build-id) = 4126c2292aeafa0cf3046b499e0131e1c7fdf77b libggml-base.so-0.12.3-1.fc42.x86_64.debug()(64bit) ollama-ggml-debuginfo = 0.12.3-1.fc42 ollama-ggml-debuginfo(x86-64) = 0.12.3-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc42 Processing files: ollama-ggml-cpu-debuginfo-0.12.3-1.fc42.x86_64 Provides: debuginfo(build-id) = 142b385dd9a4a3a6f982fa16af67305453e5636b debuginfo(build-id) = 20362a1494e7e46526ab58e125b364526bfdada6 debuginfo(build-id) = 2b484d2e5ab653d26411fd59ce45f691ec83c13a debuginfo(build-id) = 4823a5f6b8673283ab7cf2639b29960dbcc9dd40 debuginfo(build-id) = 563a989b93317608f6e0c596cd030e4ce72988bd debuginfo(build-id) = 9e63a564ef02ab38561d3e38703e2f5d9f0d717d debuginfo(build-id) = e605fe6674ff24d300ebc8fe90e4e7cf2501e069 libggml-cpu-alderlake.so-0.12.3-1.fc42.x86_64.debug()(64bit) libggml-cpu-haswell.so-0.12.3-1.fc42.x86_64.debug()(64bit) libggml-cpu-icelake.so-0.12.3-1.fc42.x86_64.debug()(64bit) libggml-cpu-sandybridge.so-0.12.3-1.fc42.x86_64.debug()(64bit) libggml-cpu-skylakex.so-0.12.3-1.fc42.x86_64.debug()(64bit) libggml-cpu-sse42.so-0.12.3-1.fc42.x86_64.debug()(64bit) libggml-cpu-x64.so-0.12.3-1.fc42.x86_64.debug()(64bit) ollama-ggml-cpu-debuginfo = 0.12.3-1.fc42 ollama-ggml-cpu-debuginfo(x86-64) = 0.12.3-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc42 Processing files: ollama-ggml-rocm-debuginfo-0.12.3-1.fc42.x86_64 Provides: debuginfo(build-id) = 6a236d56e2854fc3fc54e060077174660a627432 libggml-hip.so-0.12.3-1.fc42.x86_64.debug()(64bit) ollama-ggml-rocm-debuginfo = 0.12.3-1.fc42 ollama-ggml-rocm-debuginfo(x86-64) = 0.12.3-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.3-1.fc42 Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILD/ollama-0.12.3-build/BUILDROOT Wrote: /builddir/build/SRPMS/ollama-0.12.3-1.fc42.src.rpm Wrote: /builddir/build/RPMS/ollama-debugsource-0.12.3-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-0.12.3-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-cpu-0.12.3-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-debuginfo-0.12.3-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-rocm-debuginfo-0.12.3-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-0.12.3-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-cpu-debuginfo-0.12.3-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-debuginfo-0.12.3-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-ggml-rocm-0.12.3-1.fc42.x86_64.rpm Executing(rmbuild): /bin/sh -e /var/tmp/rpm-tmp.4kFxvU + umask 022 + cd /builddir/build/BUILD/ollama-0.12.3-build + test -d /builddir/build/BUILD/ollama-0.12.3-build + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w /builddir/build/BUILD/ollama-0.12.3-build + rm -rf /builddir/build/BUILD/ollama-0.12.3-build + RPM_EC=0 ++ jobs -p + exit 0 RPM build warnings: File listed twice: /usr/share/licenses/ollama Finish: rpmbuild ollama-0.12.3-1.fc42.src.rpm Finish: build 
phase for ollama-0.12.3-1.fc42.src.rpm
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-42-x86_64-1759552642.867825/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
INFO: Done(/var/lib/copr-rpmbuild/results/ollama-0.12.3-1.fc42.src.rpm) Config(child) 67 minutes 23 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
Finish: run
Running RPMResults tool
Package info:
{
    "packages": [
        { "name": "ollama-ggml-cpu", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" },
        { "name": "ollama", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" },
        { "name": "ollama-debugsource", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" },
        { "name": "ollama-ggml-rocm", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" },
        { "name": "ollama-ggml-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" },
        { "name": "ollama-ggml", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" },
        { "name": "ollama-ggml-rocm-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" },
        { "name": "ollama-ggml-cpu-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" },
        { "name": "ollama", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "src" },
        { "name": "ollama-debuginfo", "epoch": null, "version": "0.12.3", "release": "1.fc42", "arch": "x86_64" }
    ]
}
RPMResults finished
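(The subpackages written under "Wrote:" above carry the Provides/Requires/Recommends/Supplements metadata listed earlier in this log. A minimal sketch for inspecting and installing them from the results directory noted above; the exact directory is an assumption taken from the "Results and/or logs in" line, and the standard rpm/dnf query options are used:

    cd /var/lib/copr-rpmbuild/results        # assumed location of the built RPMs
    rpm -qp --provides ollama-0.12.3-1.fc42.x86_64.rpm
    rpm -qp --requires ollama-0.12.3-1.fc42.x86_64.rpm
    rpm -qp --recommends ollama-0.12.3-1.fc42.x86_64.rpm         # expects ollama-ggml, user(ollama), group(ollama)
    rpm -qp --supplements ollama-ggml-rocm-0.12.3-1.fc42.x86_64.rpm
    sudo dnf install ./ollama-0.12.3-1.fc42.x86_64.rpm ./ollama-ggml-0.12.3-1.fc42.x86_64.rpm ./ollama-ggml-cpu-0.12.3-1.fc42.x86_64.rpm

Installing the ollama-ggml-cpu subpackage alongside the main package matches the weak-dependency layout shown in the Recommends/Supplements output above; the ROCm subpackage is only pulled in on systems that also have the HIP runtime.)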