Warning: Permanently added '52.90.145.229' (ED25519) to the list of known hosts.

You can reproduce this build on your computer by running:

  sudo dnf install copr-rpmbuild
  /usr/bin/copr-rpmbuild --verbose --drop-resultdir --task-url https://copr.fedorainfracloud.org/backend/get-build-task/9734804-fedora-42-x86_64 --chroot fedora-42-x86_64

Version: 1.6
PID: 8647
Logging PID: 8649
Task:
{'allow_user_ssh': False,
 'appstream': False,
 'background': False,
 'build_id': 9734804,
 'buildroot_pkgs': [],
 'chroot': 'fedora-42-x86_64',
 'enable_net': True,
 'fedora_review': True,
 'git_hash': 'c841693ac7a0a45bdacafcbf592a5ac7c2777525',
 'git_repo': 'https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama/ollama',
 'isolation': 'default',
 'memory_reqs': 2048,
 'package_name': 'ollama',
 'package_version': '0.12.6-1',
 'project_dirname': 'ollama',
 'project_name': 'ollama',
 'project_owner': 'mwprado',
 'repo_priority': None,
 'repos': [{'baseurl': 'https://download.copr.fedorainfracloud.org/results/mwprado/ollama/fedora-42-x86_64/',
            'id': 'copr_base',
            'name': 'Copr repository',
            'priority': None}],
 'sandbox': 'mwprado/ollama--mwprado',
 'source_json': {},
 'source_type': None,
 'ssh_public_keys': None,
 'storage': 0,
 'submitter': 'mwprado',
 'tags': [],
 'task_id': '9734804-fedora-42-x86_64',
 'timeout': 18000,
 'uses_devel_repo': False,
 'with_opts': [],
 'without_opts': []}

Running: git clone https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama/ollama /var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama --depth 500 --no-single-branch --recursive
cmd: ['git', 'clone', 'https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama/ollama', '/var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama', '--depth', '500', '--no-single-branch', '--recursive']
cwd: .
rc: 0
stdout:
stderr: Cloning into '/var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama'...
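The "reproduce this build" banner above is derived from the printed Task fields. A minimal sketch of how that command line can be composed from the task dict; `build_task_url` and `reproduce_command` are hypothetical helper names (not copr-rpmbuild's own code), and the URL scheme simply mirrors the `--task-url` value shown in the log:

```python
import shlex

def build_task_url(build_id: int, chroot: str) -> str:
    # Hypothetical helper: the task URL format as it appears in the log banner.
    return f"https://copr.fedorainfracloud.org/backend/get-build-task/{build_id}-{chroot}"

def reproduce_command(task: dict) -> str:
    # Compose the copr-rpmbuild invocation from the 'build_id' and 'chroot'
    # fields of the Task dict printed by the builder.
    url = build_task_url(task["build_id"], task["chroot"])
    return shlex.join([
        "/usr/bin/copr-rpmbuild", "--verbose", "--drop-resultdir",
        "--task-url", url, "--chroot", task["chroot"],
    ])

task = {"build_id": 9734804, "chroot": "fedora-42-x86_64"}
print(reproduce_command(task))
```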
Running: git checkout c841693ac7a0a45bdacafcbf592a5ac7c2777525 --
cmd: ['git', 'checkout', 'c841693ac7a0a45bdacafcbf592a5ac7c2777525', '--']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama
rc: 0
stdout:
stderr: Note: switching to 'c841693ac7a0a45bdacafcbf592a5ac7c2777525'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at c841693 automatic import of ollama

Running: dist-git-client sources
cmd: ['dist-git-client', 'sources']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama
rc: 0
stdout:
stderr: INFO: Reading stdout from command: git rev-parse --abbrev-ref HEAD
INFO: Reading stdout from command: git rev-parse HEAD
INFO: Reading sources specification file: sources
INFO: Downloading main.zip
INFO: Reading stdout from command: curl --help all
INFO: Calling: curl -H Pragma: -o main.zip --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/mwprado/ollama/ollama/main.zip/md5/46d16bcec271e6138b6bd41374b3018b/main.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5607  100  5607    0     0   516k      0 --:--:-- --:--:-- --:--:--  547k
INFO: Reading stdout from command: md5sum main.zip
INFO: Downloading v0.12.6.zip
INFO: Calling: curl -H Pragma: -o v0.12.6.zip --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/mwprado/ollama/ollama/v0.12.6.zip/md5/0187597a23102057471d76bbdefe47fd/v0.12.6.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.2M  100 11.2M    0     0   307M      0 --:--:-- --:--:-- --:--:--  313M
INFO: Reading stdout from command: md5sum v0.12.6.zip
tail: /var/lib/copr-rpmbuild/main.log: file truncated

Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1761505656.433794 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: mock.py version 6.5 starting (python version = 3.13.7, NVR = mock-6.5-1.fc42), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1761505656.433794 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
Start(bootstrap): init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish(bootstrap): init plugins
Start: init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish: init plugins
INFO: Signal handler active
Start: run
INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama/ollama.spec) Config(fedora-42-x86_64)
Start: clean chroot
Finish: clean chroot
Mock Version: 6.5
INFO: Mock Version: 6.5
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1761505656.433794/root.
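The source downloads above embed the expected MD5 checksum in the URL path itself (`.../main.zip/md5/<hash>/main.zip`), and dist-git-client then runs `md5sum` on the result. A minimal sketch of the same check in Python; the helper names are assumptions for illustration, not dist-git-client's actual API:

```python
import hashlib

def md5_from_url(url: str) -> str:
    # Extract the expected hash from a dist-git source URL of the form
    # '.../<name>/md5/<hash>/<name>'.
    return url.rstrip("/").split("/")[-2]

def verify(data: bytes, url: str) -> bool:
    # Check downloaded bytes against the MD5 embedded in the URL,
    # equivalent to comparing the 'md5sum' output against the path segment.
    return hashlib.md5(data).hexdigest() == md5_from_url(url)
```

For example, the `main.zip` fetched above would be checked against `46d16bcec271e6138b6bd41374b3018b` taken from its URL.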
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
INFO: Guessed host environment type: unknown
INFO: Using container image: registry.fedoraproject.org/fedora:42
INFO: Pulling image: registry.fedoraproject.org/fedora:42
INFO: Tagging container image as mock-bootstrap-b2f575d5-ffc6-4717-981b-5f89f64f9118
INFO: Checking that e8bebb1abd11d0b503c6a3ee814114af85bac1eb1482e231e88e12184cb5bea5 image matches host's architecture
INFO: Copy content of container e8bebb1abd11d0b503c6a3ee814114af85bac1eb1482e231e88e12184cb5bea5 to /var/lib/mock/fedora-42-x86_64-bootstrap-1761505656.433794/root
INFO: mounting e8bebb1abd11d0b503c6a3ee814114af85bac1eb1482e231e88e12184cb5bea5 with podman image mount
INFO: image e8bebb1abd11d0b503c6a3ee814114af85bac1eb1482e231e88e12184cb5bea5 as /var/lib/containers/storage/overlay/e3c3b9e59ec5e00a1b58f43df9cb1318677de587433df7dea8608c8a548a54bf/merged
INFO: umounting image e8bebb1abd11d0b503c6a3ee814114af85bac1eb1482e231e88e12184cb5bea5 (/var/lib/containers/storage/overlay/e3c3b9e59ec5e00a1b58f43df9cb1318677de587433df7dea8608c8a548a54bf/merged) with podman image umount
INFO: Removing image mock-bootstrap-b2f575d5-ffc6-4717-981b-5f89f64f9118
INFO: Package manager dnf5 detected and used (fallback)
INFO: Not updating bootstrap chroot, bootstrap_image_ready=True
Start(bootstrap): creating root cache
Finish(bootstrap): creating root cache
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-1761505656.433794/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Package manager dnf5 detected and used (direct choice)
INFO: Buildroot is handled by package management downloaded with a bootstrap image:
  rpm-4.20.1-1.fc42.x86_64
  rpm-sequoia-1.7.0-5.fc42.x86_64
  dnf5-5.2.16.0-1.fc42.x86_64
  dnf5-plugins-5.2.16.0-1.fc42.x86_64
Start: installing minimal buildroot with dnf5
Updating and loading repositories:
 Copr repository 100% |  91.6 KiB/s |  3.8 KiB | 00m00s
 updates         100% |  18.3 MiB/s | 11.3 MiB | 00m01s
 fedora          100% |  32.3 MiB/s | 35.4 MiB | 00m01s
Repositories loaded.
Package  Arch  Version  Repository  Size
Installing group/module packages:
 bash x86_64 5.2.37-1.fc42 fedora 8.2 MiB
 bzip2 x86_64 1.0.8-20.fc42 fedora 99.3 KiB
 coreutils x86_64 9.6-6.fc42 updates 5.4 MiB
 cpio x86_64 2.15-4.fc42 fedora 1.1 MiB
 diffutils x86_64 3.12-1.fc42 updates 1.6 MiB
 fedora-release-common noarch 42-30 updates 20.2 KiB
 findutils x86_64 1:4.10.0-5.fc42 fedora 1.9 MiB
 gawk x86_64 5.3.1-1.fc42 fedora 1.7 MiB
 glibc-minimal-langpack x86_64 2.41-11.fc42 updates 0.0 B
 grep x86_64 3.11-10.fc42 fedora 1.0 MiB
 gzip x86_64 1.13-3.fc42 fedora 392.9 KiB
 info x86_64 7.2-3.fc42 fedora 357.9 KiB
 patch x86_64 2.8-1.fc42 updates 222.8 KiB
 redhat-rpm-config noarch 342-4.fc42 updates 185.5 KiB
 rpm-build x86_64 4.20.1-1.fc42 fedora 168.7 KiB
 sed x86_64 4.9-4.fc42 fedora 857.3 KiB
 shadow-utils x86_64 2:4.17.4-1.fc42 fedora 4.0 MiB
 tar x86_64 2:1.35-5.fc42 fedora 3.0 MiB
 unzip x86_64 6.0-66.fc42 fedora 390.3 KiB
 util-linux x86_64 2.40.4-7.fc42 fedora 3.4 MiB
 which x86_64 2.23-2.fc42 updates 83.5 KiB
 xz x86_64 1:5.8.1-2.fc42 updates 1.3 MiB
Installing dependencies:
 add-determinism x86_64 0.6.0-1.fc42 fedora 2.5 MiB
 alternatives x86_64 1.33-1.fc42 updates 62.2 KiB
 ansible-srpm-macros noarch 1-17.1.fc42 fedora 35.7 KiB
 audit-libs x86_64 4.1.1-1.fc42 updates 378.8 KiB
 basesystem noarch 11-22.fc42 fedora 0.0 B
 binutils x86_64 2.44-6.fc42 updates 25.8 MiB
 build-reproducibility-srpm-macros noarch 0.6.0-1.fc42 fedora 735.0 B
 bzip2-libs x86_64 1.0.8-20.fc42 fedora 84.6 KiB
 ca-certificates noarch 2025.2.80_v9.0.304-1.0.fc42 updates 2.7 MiB
 coreutils-common x86_64 9.6-6.fc42 updates 11.1 MiB
 crypto-policies noarch 20250707-1.gitad370a8.fc42 updates 142.9 KiB
 curl x86_64 8.11.1-6.fc42 updates 450.6 KiB
 cyrus-sasl-lib x86_64 2.1.28-30.fc42 fedora 2.3 MiB
 debugedit x86_64 5.1-7.fc42 updates 192.7 KiB
 dwz x86_64 0.16-1.fc42 updates 287.1 KiB
 ed x86_64 1.21-2.fc42 fedora 146.5 KiB
 efi-srpm-macros noarch 6-3.fc42 updates 40.1 KiB
 elfutils x86_64 0.193-2.fc42 updates 2.9 MiB
 elfutils-debuginfod-client x86_64 0.193-2.fc42 updates 83.9 KiB
 elfutils-default-yama-scope noarch 0.193-2.fc42 updates 1.8 KiB
 elfutils-libelf x86_64 0.193-2.fc42 updates 1.2 MiB
 elfutils-libs x86_64 0.193-2.fc42 updates 683.4 KiB
 fedora-gpg-keys noarch 42-1 fedora 128.2 KiB
 fedora-release noarch 42-30 updates 0.0 B
 fedora-release-identity-basic noarch 42-30 updates 646.0 B
 fedora-repos noarch 42-1 fedora 4.9 KiB
 file x86_64 5.46-3.fc42 updates 100.2 KiB
 file-libs x86_64 5.46-3.fc42 updates 11.9 MiB
 filesystem x86_64 3.18-47.fc42 updates 112.0 B
 filesystem-srpm-macros noarch 3.18-47.fc42 updates 38.2 KiB
 fonts-srpm-macros noarch 1:2.0.5-22.fc42 updates 55.8 KiB
 forge-srpm-macros noarch 0.4.0-2.fc42 fedora 38.9 KiB
 fpc-srpm-macros noarch 1.3-14.fc42 fedora 144.0 B
 gdb-minimal x86_64 16.3-1.fc42 updates 13.2 MiB
 gdbm-libs x86_64 1:1.23-9.fc42 fedora 129.9 KiB
 ghc-srpm-macros noarch 1.9.2-2.fc42 fedora 779.0 B
 glibc x86_64 2.41-11.fc42 updates 6.6 MiB
 glibc-common x86_64 2.41-11.fc42 updates 1.0 MiB
 glibc-gconv-extra x86_64 2.41-11.fc42 updates 7.2 MiB
 gmp x86_64 1:6.3.0-4.fc42 fedora 811.3 KiB
 gnat-srpm-macros noarch 6-7.fc42 fedora 1.0 KiB
 gnulib-l10n noarch 20241231-1.fc42 updates 655.0 KiB
 go-srpm-macros noarch 3.8.0-1.fc42 updates 61.9 KiB
 jansson x86_64 2.14-2.fc42 fedora 93.1 KiB
 json-c x86_64 0.18-2.fc42 fedora 86.7 KiB
 kernel-srpm-macros noarch 1.0-25.fc42 fedora 1.9 KiB
 keyutils-libs x86_64 1.6.3-5.fc42 fedora 58.3 KiB
 krb5-libs x86_64 1.21.3-6.fc42 updates 2.3 MiB
 libacl x86_64 2.3.2-3.fc42 fedora 38.3 KiB
 libarchive x86_64 3.8.1-1.fc42 updates 955.2 KiB
 libattr x86_64 2.5.2-5.fc42 fedora 27.1 KiB
 libblkid x86_64 2.40.4-7.fc42 fedora 262.4 KiB
 libbrotli x86_64 1.1.0-6.fc42 fedora 841.3 KiB
 libcap x86_64 2.73-2.fc42 fedora 207.1 KiB
 libcap-ng x86_64 0.8.5-4.fc42 fedora 72.9 KiB
 libcom_err x86_64 1.47.2-3.fc42 fedora 67.1 KiB
 libcurl x86_64 8.11.1-6.fc42 updates 834.1 KiB
 libeconf x86_64 0.7.6-2.fc42 updates 64.6 KiB
 libevent x86_64 2.1.12-15.fc42 fedora 903.1 KiB
 libfdisk x86_64 2.40.4-7.fc42 fedora 372.3 KiB
 libffi x86_64 3.4.6-5.fc42 fedora 82.3 KiB
 libgcc x86_64 15.2.1-1.fc42 updates 266.6 KiB
 libgomp x86_64 15.2.1-1.fc42 updates 541.1 KiB
 libidn2 x86_64 2.3.8-1.fc42 fedora 556.5 KiB
 libmount x86_64 2.40.4-7.fc42 fedora 356.3 KiB
 libnghttp2 x86_64 1.64.0-3.fc42 fedora 170.4 KiB
 libpkgconf x86_64 2.3.0-2.fc42 fedora 78.1 KiB
 libpsl x86_64 0.21.5-5.fc42 fedora 76.4 KiB
 libselinux x86_64 3.8-3.fc42 updates 193.1 KiB
 libsemanage x86_64 3.8.1-2.fc42 updates 304.4 KiB
 libsepol x86_64 3.8-1.fc42 fedora 826.0 KiB
 libsmartcols x86_64 2.40.4-7.fc42 fedora 180.4 KiB
 libssh x86_64 0.11.3-1.fc42 updates 567.1 KiB
 libssh-config noarch 0.11.3-1.fc42 updates 277.0 B
 libstdc++ x86_64 15.2.1-1.fc42 updates 2.8 MiB
 libtasn1 x86_64 4.20.0-1.fc42 fedora 176.3 KiB
 libtool-ltdl x86_64 2.5.4-4.fc42 fedora 70.1 KiB
 libunistring x86_64 1.1-9.fc42 fedora 1.7 MiB
 libuuid x86_64 2.40.4-7.fc42 fedora 37.3 KiB
 libverto x86_64 0.3.2-10.fc42 fedora 25.4 KiB
 libxcrypt x86_64 4.4.38-7.fc42 updates 284.5 KiB
 libxml2 x86_64 2.12.10-1.fc42 fedora 1.7 MiB
 libzstd x86_64 1.5.7-1.fc42 fedora 807.8 KiB
 lua-libs x86_64 5.4.8-1.fc42 updates 280.8 KiB
 lua-srpm-macros noarch 1-15.fc42 fedora 1.3 KiB
 lz4-libs x86_64 1.10.0-2.fc42 fedora 157.4 KiB
 mpfr x86_64 4.2.2-1.fc42 fedora 828.8 KiB
 ncurses-base noarch 6.5-5.20250125.fc42 fedora 326.8 KiB
 ncurses-libs x86_64 6.5-5.20250125.fc42 fedora 946.3 KiB
 ocaml-srpm-macros noarch 10-4.fc42 fedora 1.9 KiB
 openblas-srpm-macros noarch 2-19.fc42 fedora 112.0 B
 openldap x86_64 2.6.10-1.fc42 updates 655.8 KiB
 openssl-libs x86_64 1:3.2.6-2.fc42 updates 7.8 MiB
 p11-kit x86_64 0.25.8-1.fc42 updates 2.3 MiB
 p11-kit-trust x86_64 0.25.8-1.fc42 updates 446.5 KiB
 package-notes-srpm-macros noarch 0.5-13.fc42 fedora 1.6 KiB
 pam-libs x86_64 1.7.0-6.fc42 updates 126.7 KiB
 pcre2 x86_64 10.45-1.fc42 fedora 697.7 KiB
 pcre2-syntax noarch 10.45-1.fc42 fedora 273.9 KiB
 perl-srpm-macros noarch 1-57.fc42 fedora 861.0 B
 pkgconf x86_64 2.3.0-2.fc42 fedora 88.5 KiB
 pkgconf-m4 noarch 2.3.0-2.fc42 fedora 14.4 KiB
 pkgconf-pkg-config x86_64 2.3.0-2.fc42 fedora 989.0 B
 popt x86_64 1.19-8.fc42 fedora 132.8 KiB
 publicsuffix-list-dafsa noarch 20250616-1.fc42 updates 69.1 KiB
 pyproject-srpm-macros noarch 1.18.4-1.fc42 updates 1.9 KiB
 python-srpm-macros noarch 3.13-5.fc42 updates 51.0 KiB
 qt5-srpm-macros noarch 5.15.17-1.fc42 updates 500.0 B
 qt6-srpm-macros noarch 6.9.2-1.fc42 updates 464.0 B
 readline x86_64 8.2-13.fc42 fedora 485.0 KiB
 rpm x86_64 4.20.1-1.fc42 fedora 3.1 MiB
 rpm-build-libs x86_64 4.20.1-1.fc42 fedora 206.6 KiB
 rpm-libs x86_64 4.20.1-1.fc42 fedora 721.8 KiB
 rpm-sequoia x86_64 1.7.0-5.fc42 fedora 2.4 MiB
 rust-srpm-macros noarch 26.4-1.fc42 updates 4.8 KiB
 setup noarch 2.15.0-13.fc42 fedora 720.9 KiB
 sqlite-libs x86_64 3.47.2-5.fc42 updates 1.5 MiB
 systemd-libs x86_64 257.10-1.fc42 updates 2.2 MiB
 systemd-standalone-sysusers x86_64 257.10-1.fc42 updates 277.3 KiB
 tree-sitter-srpm-macros noarch 0.1.0-8.fc42 fedora 6.5 KiB
 util-linux-core x86_64 2.40.4-7.fc42 fedora 1.4 MiB
 xxhash-libs x86_64 0.8.3-2.fc42 fedora 90.2 KiB
 xz-libs x86_64 1:5.8.1-2.fc42 updates 217.8 KiB
 zig-srpm-macros noarch 1-4.fc42 fedora 1.1 KiB
 zip x86_64 3.0-43.fc42 fedora 698.5 KiB
 zlib-ng-compat x86_64 2.2.5-2.fc42 updates 137.6 KiB
 zstd x86_64 1.5.7-1.fc42 fedora 1.7 MiB
Installing groups:
 Buildsystem building group
Transaction Summary:
 Installing: 149 packages
Total size of inbound packages is 52 MiB. Need to download 52 MiB.
After this operation, 178 MiB extra will be used (install 178 MiB, remove 0 B).
[  1/149] bzip2-0:1.0.8-20.fc42.x86_64 100% | 2.8 MiB/s | 52.1 KiB | 00m00s
[  2/149] cpio-0:2.15-4.fc42.x86_64 100% | 12.5 MiB/s | 294.6 KiB | 00m00s
[  3/149] bash-0:5.2.37-1.fc42.x86_64 100% | 62.3 MiB/s | 1.8 MiB | 00m00s
[  4/149] findutils-1:4.10.0-5.fc42.x86 100% | 49.0 MiB/s | 551.5 KiB | 00m00s
[  5/149] grep-0:3.11-10.fc42.x86_64 100% | 36.6 MiB/s | 300.1 KiB | 00m00s
[  6/149] gzip-0:1.13-3.fc42.x86_64 100% | 27.7 MiB/s | 170.4 KiB | 00m00s
[  7/149] info-0:7.2-3.fc42.x86_64 100% | 35.9 MiB/s | 183.8 KiB | 00m00s
[  8/149] rpm-build-0:4.20.1-1.fc42.x86 100% | 13.3 MiB/s | 81.8 KiB | 00m00s
[  9/149] sed-0:4.9-4.fc42.x86_64 100% | 77.5 MiB/s | 317.3 KiB | 00m00s
[ 10/149] shadow-utils-2:4.17.4-1.fc42. 100% | 110.2 MiB/s | 1.3 MiB | 00m00s
[ 11/149] unzip-0:6.0-66.fc42.x86_64 100% | 15.0 MiB/s | 184.6 KiB | 00m00s
[ 12/149] tar-2:1.35-5.fc42.x86_64 100% | 52.6 MiB/s | 862.5 KiB | 00m00s
[ 13/149] fedora-release-common-0:42-30 100% | 8.0 MiB/s | 24.5 KiB | 00m00s
[ 14/149] diffutils-0:3.12-1.fc42.x86_6 100% | 54.8 MiB/s | 392.6 KiB | 00m00s
[ 15/149] coreutils-0:9.6-6.fc42.x86_64 100% | 75.9 MiB/s | 1.1 MiB | 00m00s
[ 16/149] glibc-minimal-langpack-0:2.41 100% | 8.8 MiB/s | 98.7 KiB | 00m00s
[ 17/149] gawk-0:5.3.1-1.fc42.x86_64 100% | 71.9 MiB/s | 1.1 MiB | 00m00s
[ 18/149] patch-0:2.8-1.fc42.x86_64 100% | 6.5 MiB/s | 113.5 KiB | 00m00s
[ 19/149] redhat-rpm-config-0:342-4.fc4 100% | 7.2 MiB/s | 81.1 KiB | 00m00s
[ 20/149] util-linux-0:2.40.4-7.fc42.x8 100% | 88.8 MiB/s | 1.2 MiB | 00m00s
[ 21/149] which-0:2.23-2.fc42.x86_64 100% | 6.8 MiB/s | 41.7 KiB | 00m00s
[ 22/149] xz-1:5.8.1-2.fc42.x86_64 100% | 79.9 MiB/s | 572.6 KiB | 00m00s
[ 23/149] bzip2-libs-0:1.0.8-20.fc42.x8 100% | 10.6 MiB/s | 43.6 KiB | 00m00s
[ 24/149] ncurses-libs-0:6.5-5.20250125 100% | 54.5 MiB/s | 335.0 KiB | 00m00s
[ 25/149] pcre2-0:10.45-1.fc42.x86_64 100% | 64.2 MiB/s | 262.8 KiB | 00m00s
[ 26/149] popt-0:1.19-8.fc42.x86_64 100% | 16.1 MiB/s | 65.9 KiB | 00m00s
[ 27/149] readline-0:8.2-13.fc42.x86_64 100% | 70.1 MiB/s | 215.2 KiB | 00m00s
[ 28/149] rpm-0:4.20.1-1.fc42.x86_64 100% | 133.9 MiB/s | 548.4 KiB | 00m00s
[ 29/149] rpm-build-libs-0:4.20.1-1.fc4 100% | 24.3 MiB/s | 99.7 KiB | 00m00s
[ 30/149] rpm-libs-0:4.20.1-1.fc42.x86_ 100% | 50.8 MiB/s | 312.0 KiB | 00m00s
[ 31/149] libacl-0:2.3.2-3.fc42.x86_64 100% | 3.2 MiB/s | 23.0 KiB | 00m00s
[ 32/149] zstd-0:1.5.7-1.fc42.x86_64 100% | 47.4 MiB/s | 485.9 KiB | 00m00s
[ 33/149] setup-0:2.15.0-13.fc42.noarch 100% | 19.0 MiB/s | 155.8 KiB | 00m00s
[ 34/149] gmp-1:6.3.0-4.fc42.x86_64 100% | 77.6 MiB/s | 317.7 KiB | 00m00s
[ 35/149] libattr-0:2.5.2-5.fc42.x86_64 100% | 3.3 MiB/s | 17.1 KiB | 00m00s
[ 36/149] fedora-repos-0:42-1.noarch 100% | 3.0 MiB/s | 9.2 KiB | 00m00s
[ 37/149] coreutils-common-0:9.6-6.fc42 100% | 138.9 MiB/s | 2.1 MiB | 00m00s
[ 38/149] libcap-0:2.73-2.fc42.x86_64 100% | 10.3 MiB/s | 84.3 KiB | 00m00s
[ 39/149] mpfr-0:4.2.2-1.fc42.x86_64 100% | 56.2 MiB/s | 345.3 KiB | 00m00s
[ 40/149] glibc-common-0:2.41-11.fc42.x 100% | 62.8 MiB/s | 385.6 KiB | 00m00s
[ 41/149] ed-0:1.21-2.fc42.x86_64 100% | 16.0 MiB/s | 82.0 KiB | 00m00s
[ 42/149] ansible-srpm-macros-0:1-17.1. 100% | 5.0 MiB/s | 20.3 KiB | 00m00s
[ 43/149] build-reproducibility-srpm-ma 100% | 5.7 MiB/s | 11.7 KiB | 00m00s
[ 44/149] forge-srpm-macros-0:0.4.0-2.f 100% | 9.7 MiB/s | 19.9 KiB | 00m00s
[ 45/149] fpc-srpm-macros-0:1.3-14.fc42 100% | 3.9 MiB/s | 8.0 KiB | 00m00s
[ 46/149] ghc-srpm-macros-0:1.9.2-2.fc4 100% | 4.5 MiB/s | 9.2 KiB | 00m00s
[ 47/149] gnat-srpm-macros-0:6-7.fc42.n 100% | 4.2 MiB/s | 8.6 KiB | 00m00s
[ 48/149] kernel-srpm-macros-0:1.0-25.f 100% | 4.8 MiB/s | 9.9 KiB | 00m00s
[ 49/149] lua-srpm-macros-0:1-15.fc42.n 100% | 2.9 MiB/s | 8.9 KiB | 00m00s
[ 50/149] ocaml-srpm-macros-0:10-4.fc42 100% | 2.2 MiB/s | 9.2 KiB | 00m00s
[ 51/149] openblas-srpm-macros-0:2-19.f 100% | 1.9 MiB/s | 7.8 KiB | 00m00s
[ 52/149] package-notes-srpm-macros-0:0 100% | 4.5 MiB/s | 9.3 KiB | 00m00s
[ 53/149] perl-srpm-macros-0:1-57.fc42. 100% | 2.8 MiB/s | 8.5 KiB | 00m00s
[ 54/149] zig-srpm-macros-0:1-4.fc42.no 100% | 4.0 MiB/s | 8.2 KiB | 00m00s
[ 55/149] tree-sitter-srpm-macros-0:0.1 100% | 2.7 MiB/s | 11.2 KiB | 00m00s
[ 56/149] zip-0:3.0-43.fc42.x86_64 100% | 36.8 MiB/s | 263.5 KiB | 00m00s
[ 57/149] libblkid-0:2.40.4-7.fc42.x86_ 100% | 17.1 MiB/s | 122.5 KiB | 00m00s
[ 58/149] libcap-ng-0:0.8.5-4.fc42.x86_ 100% | 3.1 MiB/s | 32.2 KiB | 00m00s
[ 59/149] libfdisk-0:2.40.4-7.fc42.x86_ 100% | 22.1 MiB/s | 158.5 KiB | 00m00s
[ 60/149] libmount-0:2.40.4-7.fc42.x86_ 100% | 25.2 MiB/s | 155.1 KiB | 00m00s
[ 61/149] libsmartcols-0:2.40.4-7.fc42. 100% | 13.2 MiB/s | 81.2 KiB | 00m00s
[ 62/149] libuuid-0:2.40.4-7.fc42.x86_6 100% | 3.1 MiB/s | 25.3 KiB | 00m00s
[ 63/149] util-linux-core-0:2.40.4-7.fc 100% | 47.0 MiB/s | 529.2 KiB | 00m00s
[ 64/149] xz-libs-1:5.8.1-2.fc42.x86_64 100% | 12.3 MiB/s | 113.0 KiB | 00m00s
[ 65/149] ncurses-base-0:6.5-5.20250125 100% | 6.6 MiB/s | 88.1 KiB | 00m00s
[ 66/149] pcre2-syntax-0:10.45-1.fc42.n 100% | 7.5 MiB/s | 161.7 KiB | 00m00s
[ 67/149] libzstd-0:1.5.7-1.fc42.x86_64 100% | 13.4 MiB/s | 314.8 KiB | 00m00s
[ 68/149] rpm-sequoia-0:1.7.0-5.fc42.x8 100% | 44.5 MiB/s | 911.1 KiB | 00m00s
[ 69/149] lz4-libs-0:1.10.0-2.fc42.x86_ 100% | 8.5 MiB/s | 78.1 KiB | 00m00s
[ 70/149] gnulib-l10n-0:20241231-1.fc42 100% | 18.3 MiB/s | 150.1 KiB | 00m00s
[ 71/149] fedora-gpg-keys-0:42-1.noarch 100% | 33.1 MiB/s | 135.6 KiB | 00m00s
[ 72/149] add-determinism-0:0.6.0-1.fc4 100% | 149.5 MiB/s | 918.3 KiB | 00m00s
[ 73/149] basesystem-0:11-22.fc42.noarc 100% | 3.6 MiB/s | 7.3 KiB | 00m00s
[ 74/149] glibc-gconv-extra-0:2.41-11.f 100% | 126.3 MiB/s | 1.6 MiB | 00m00s
[ 75/149] libgcc-0:15.2.1-1.fc42.x86_64 100% | 16.1 MiB/s | 131.6 KiB | 00m00s
[ 76/149] glibc-0:2.41-11.fc42.x86_64 100% | 132.1 MiB/s | 2.2 MiB | 00m00s
[ 77/149] zlib-ng-compat-0:2.2.5-2.fc42 100% | 15.5 MiB/s | 79.2 KiB | 00m00s
[ 78/149] libstdc++-0:15.2.1-1.fc42.x86 100% | 112.0 MiB/s | 917.8 KiB | 00m00s
[ 79/149] libselinux-0:3.8-3.fc42.x86_6 100% | 18.9 MiB/s | 96.7 KiB | 00m00s
[ 80/149] filesystem-0:3.18-47.fc42.x86 100% | 121.2 MiB/s | 1.3 MiB | 00m00s
[ 81/149] libsepol-0:3.8-1.fc42.x86_64 100% | 48.7 MiB/s | 348.9 KiB | 00m00s
[ 82/149] libxcrypt-0:4.4.38-7.fc42.x86 100% | 24.8 MiB/s | 127.2 KiB | 00m00s
[ 83/149] audit-libs-0:4.1.1-1.fc42.x86 100% | 33.8 MiB/s | 138.5 KiB | 00m00s
[ 84/149] systemd-libs-0:257.10-1.fc42. 100% | 112.9 MiB/s | 809.6 KiB | 00m00s
[ 85/149] pam-libs-0:1.7.0-6.fc42.x86_6 100% | 11.2 MiB/s | 57.5 KiB | 00m00s
[ 86/149] libeconf-0:0.7.6-2.fc42.x86_6 100% | 11.4 MiB/s | 35.2 KiB | 00m00s
[ 87/149] libsemanage-0:3.8.1-2.fc42.x8 100% | 30.1 MiB/s | 123.2 KiB | 00m00s
[ 88/149] lua-libs-0:5.4.8-1.fc42.x86_6 100% | 12.9 MiB/s | 131.9 KiB | 00m00s
[ 89/149] sqlite-libs-0:3.47.2-5.fc42.x 100% | 66.9 MiB/s | 753.8 KiB | 00m00s
[ 90/149] openssl-libs-1:3.2.6-2.fc42.x 100% | 129.7 MiB/s | 2.3 MiB | 00m00s
[ 91/149] elfutils-libelf-0:0.193-2.fc4 100% | 25.4 MiB/s | 207.8 KiB | 00m00s
[ 92/149] elfutils-libs-0:0.193-2.fc42. 100% | 52.8 MiB/s | 270.2 KiB | 00m00s
[ 93/149] elfutils-0:0.193-2.fc42.x86_6 100% | 111.6 MiB/s | 571.4 KiB | 00m00s
[ 94/149] json-c-0:0.18-2.fc42.x86_64 100% | 14.6 MiB/s | 44.9 KiB | 00m00s
[ 95/149] elfutils-debuginfod-client-0: 100% | 11.5 MiB/s | 46.9 KiB | 00m00s
[ 96/149] file-0:5.46-3.fc42.x86_64 100% | 2.6 MiB/s | 48.6 KiB | 00m00s
[ 97/149] file-libs-0:5.46-3.fc42.x86_6 100% | 41.5 MiB/s | 849.5 KiB | 00m00s
[ 98/149] libgomp-0:15.2.1-1.fc42.x86_6 100% | 13.4 MiB/s | 371.6 KiB | 00m00s
[ 99/149] jansson-0:2.14-2.fc42.x86_64 100% | 4.5 MiB/s | 45.7 KiB | 00m00s
[100/149] debugedit-0:5.1-7.fc42.x86_64 100% | 12.8 MiB/s | 78.8 KiB | 00m00s
[101/149] libarchive-0:3.8.1-1.fc42.x86 100% | 51.5 MiB/s | 421.6 KiB | 00m00s
[102/149] libxml2-0:2.12.10-1.fc42.x86_ 100% | 83.5 MiB/s | 683.7 KiB | 00m00s
[103/149] pkgconf-pkg-config-0:2.3.0-2. 100% | 2.4 MiB/s | 9.9 KiB | 00m00s
[104/149] pkgconf-0:2.3.0-2.fc42.x86_64 100% | 8.8 MiB/s | 44.9 KiB | 00m00s
[105/149] pkgconf-m4-0:2.3.0-2.fc42.noa 100% | 3.5 MiB/s | 14.2 KiB | 00m00s
[106/149] binutils-0:2.44-6.fc42.x86_64 100% | 148.0 MiB/s | 5.8 MiB | 00m00s
[107/149] libpkgconf-0:2.3.0-2.fc42.x86 100% | 4.2 MiB/s | 38.4 KiB | 00m00s
[108/149] curl-0:8.11.1-6.fc42.x86_64 100% | 23.9 MiB/s | 220.0 KiB | 00m00s
[109/149] efi-srpm-macros-0:6-3.fc42.no 100% | 3.7 MiB/s | 22.5 KiB | 00m00s
[110/149] dwz-0:0.16-1.fc42.x86_64 100% | 18.9 MiB/s | 135.5 KiB | 00m00s
[111/149] filesystem-srpm-macros-0:3.18 100% | 2.5 MiB/s | 26.1 KiB | 00m00s
[112/149] fonts-srpm-macros-1:2.0.5-22. 100% | 5.3 MiB/s | 27.2 KiB | 00m00s
[113/149] go-srpm-macros-0:3.8.0-1.fc42 100% | 6.9 MiB/s | 28.3 KiB | 00m00s
[114/149] qt5-srpm-macros-0:5.15.17-1.f 100% | 4.3 MiB/s | 8.7 KiB | 00m00s
[115/149] pyproject-srpm-macros-0:1.18. 100% | 4.5 MiB/s | 13.7 KiB | 00m00s
[116/149] python-srpm-macros-0:3.13-5.f 100% | 7.3 MiB/s | 22.5 KiB | 00m00s
[117/149] qt6-srpm-macros-0:6.9.2-1.fc4 100% | 3.1 MiB/s | 9.4 KiB | 00m00s
[118/149] rust-srpm-macros-0:26.4-1.fc4 100% | 2.7 MiB/s | 11.2 KiB | 00m00s
[119/149] ca-certificates-0:2025.2.80_v 100% | 105.6 MiB/s | 973.5 KiB | 00m00s
[120/149] elfutils-default-yama-scope-0 100% | 3.1 MiB/s | 12.6 KiB | 00m00s
[121/149] crypto-policies-0:20250707-1. 100% | 13.4 MiB/s | 96.0 KiB | 00m00s
[122/149] p11-kit-0:0.25.8-1.fc42.x86_6 100% | 98.3 MiB/s | 503.5 KiB | 00m00s
[123/149] libtasn1-0:4.20.0-1.fc42.x86_ 100% | 18.3 MiB/s | 75.0 KiB | 00m00s
[124/149] libffi-0:3.4.6-5.fc42.x86_64 100% | 5.6 MiB/s | 39.9 KiB | 00m00s
[125/149] alternatives-0:1.33-1.fc42.x8 100% | 13.2 MiB/s | 40.5 KiB | 00m00s
[126/149] p11-kit-trust-0:0.25.8-1.fc42 100% | 34.0 MiB/s | 139.2 KiB | 00m00s
[127/149] fedora-release-0:42-30.noarch 100% | 4.4 MiB/s | 13.5 KiB | 00m00s
[128/149] systemd-standalone-sysusers-0 100% | 30.0 MiB/s | 153.7 KiB | 00m00s
[129/149] xxhash-libs-0:0.8.3-2.fc42.x8 100% | 5.5 MiB/s | 39.1 KiB | 00m00s
[130/149] fedora-release-identity-basic 100% | 1.6 MiB/s | 14.3 KiB | 00m00s
[131/149] libcurl-0:8.11.1-6.fc42.x86_6 100% | 60.5 MiB/s | 371.7 KiB | 00m00s
[132/149] libssh-0:0.11.3-1.fc42.x86_64 100% | 32.5 MiB/s | 233.0 KiB | 00m00s
[133/149] gdb-minimal-0:16.3-1.fc42.x86 100% | 169.5 MiB/s | 4.4 MiB | 00m00s
[134/149] libbrotli-0:1.1.0-6.fc42.x86_ 100% | 30.2 MiB/s | 339.8 KiB | 00m00s
[135/149] libidn2-0:2.3.8-1.fc42.x86_64 100% | 24.4 MiB/s | 174.8 KiB | 00m00s
[136/149] libnghttp2-0:1.64.0-3.fc42.x8 100% | 19.0 MiB/s | 77.7 KiB | 00m00s
[137/149] libpsl-0:0.21.5-5.fc42.x86_64 100% | 15.6 MiB/s | 64.0 KiB | 00m00s
[138/149] libssh-config-0:0.11.3-1.fc42 100% | 2.2 MiB/s | 9.1 KiB | 00m00s
[139/149] publicsuffix-list-dafsa-0:202 100% | 28.9 MiB/s | 59.2 KiB | 00m00s
[140/149] libunistring-0:1.1-9.fc42.x86 100% | 106.0 MiB/s | 542.5 KiB | 00m00s
[141/149] keyutils-libs-0:1.6.3-5.fc42. 100% | 5.1 MiB/s | 31.5 KiB | 00m00s
[142/149] krb5-libs-0:1.21.3-6.fc42.x86 100% | 92.7 MiB/s | 759.8 KiB | 00m00s
[143/149] libcom_err-0:1.47.2-3.fc42.x8 100% | 4.4 MiB/s | 26.9 KiB | 00m00s
[144/149] libverto-0:0.3.2-10.fc42.x86_ 100% | 6.8 MiB/s | 20.8 KiB | 00m00s
[145/149] openldap-0:2.6.10-1.fc42.x86_ 100% | 63.1 MiB/s | 258.6 KiB | 00m00s
[146/149] libevent-0:2.1.12-15.fc42.x86 100% | 50.8 MiB/s | 260.2 KiB | 00m00s
[147/149] cyrus-sasl-lib-0:2.1.28-30.fc 100% | 110.7 MiB/s | 793.5 KiB | 00m00s
[148/149] libtool-ltdl-0:2.5.4-4.fc42.x 100% | 8.8 MiB/s | 36.2 KiB | 00m00s
[149/149] gdbm-libs-1:1.23-9.fc42.x86_6 100% | 27.8 MiB/s | 57.0 KiB | 00m00s
--------------------------------------------------------------------------------
[149/149] Total 100% | 124.6 MiB/s | 52.4 MiB | 00m00s
Running transaction
Importing OpenPGP key 0x105EF944:
 UserID     : "Fedora (42) "
 Fingerprint: B0F4950458F69E1150C6C5EDC8AC4916105EF944
 From       : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-42-primary
The key was successfully imported.
[  1/151] Verify package files 100% | 846.0 B/s | 149.0 B | 00m00s
[  2/151] Prepare transaction 100% | 4.2 KiB/s | 149.0 B | 00m00s
[  3/151] Installing libgcc-0:15.2.1-1. 100% | 261.9 MiB/s | 268.2 KiB | 00m00s
[  4/151] Installing publicsuffix-list- 100% | 0.0 B/s | 69.8 KiB | 00m00s
[  5/151] Installing libssh-config-0:0. 100% | 0.0 B/s | 816.0 B | 00m00s
[  6/151] Installing fedora-release-ide 100% | 0.0 B/s | 904.0 B | 00m00s
[  7/151] Installing fedora-gpg-keys-0: 100% | 56.9 MiB/s | 174.8 KiB | 00m00s
[  8/151] Installing fedora-repos-0:42- 100% | 0.0 B/s | 5.7 KiB | 00m00s
[  9/151] Installing fedora-release-com 100% | 23.9 MiB/s | 24.5 KiB | 00m00s
[ 10/151] Installing fedora-release-0:4 100% | 9.3 KiB/s | 124.0 B | 00m00s
>>> Running sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Finished sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Scriptlet output:
>>> Creating group 'adm' with GID 4.
>>> Creating group 'audio' with GID 63.
>>> Creating group 'bin' with GID 1.
>>> Creating group 'cdrom' with GID 11.
>>> Creating group 'clock' with GID 103.
>>> Creating group 'daemon' with GID 2.
>>> Creating group 'dialout' with GID 18.
>>> Creating group 'disk' with GID 6.
>>> Creating group 'floppy' with GID 19.
>>> Creating group 'ftp' with GID 50.
>>> Creating group 'games' with GID 20.
>>> Creating group 'input' with GID 104.
>>> Creating group 'kmem' with GID 9.
>>> Creating group 'kvm' with GID 36.
>>> Creating group 'lock' with GID 54.
>>> Creating group 'lp' with GID 7.
>>> Creating group 'mail' with GID 12.
>>> Creating group 'man' with GID 15.
>>> Creating group 'mem' with GID 8.
>>> Creating group 'nobody' with GID 65534.
>>> Creating group 'render' with GID 105.
>>> Creating group 'root' with GID 0.
>>> Creating group 'sgx' with GID 106.
>>> Creating group 'sys' with GID 3.
>>> Creating group 'tape' with GID 33.
>>> Creating group 'tty' with GID 5.
>>> Creating group 'users' with GID 100.
>>> Creating group 'utmp' with GID 22.
>>> Creating group 'video' with GID 39.
>>> Creating group 'wheel' with GID 10.
>>>
>>> Running sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Finished sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Scriptlet output:
>>> Creating user 'adm' (adm) with UID 3 and GID 4.
>>> Creating user 'bin' (bin) with UID 1 and GID 1.
>>> Creating user 'daemon' (daemon) with UID 2 and GID 2.
>>> Creating user 'ftp' (FTP User) with UID 14 and GID 50.
>>> Creating user 'games' (games) with UID 12 and GID 20.
>>> Creating user 'halt' (halt) with UID 7 and GID 0.
>>> Creating user 'lp' (lp) with UID 4 and GID 7.
>>> Creating user 'mail' (mail) with UID 8 and GID 12.
>>> Creating user 'nobody' (Kernel Overflow User) with UID 65534 and GID 65534.
>>> Creating user 'operator' (operator) with UID 11 and GID 0.
>>> Creating user 'root' (Super User) with UID 0 and GID 0.
>>> Creating user 'shutdown' (shutdown) with UID 6 and GID 0.
>>> Creating user 'sync' (sync) with UID 5 and GID 0.
>>>
[ 11/151] Installing setup-0:2.15.0-13. 100% | 50.7 MiB/s | 726.7 KiB | 00m00s
[ 12/151] Installing filesystem-0:3.18- 100% | 2.8 MiB/s | 212.8 KiB | 00m00s
[ 13/151] Installing basesystem-0:11-22 100% | 0.0 B/s | 124.0 B | 00m00s
[ 14/151] Installing rust-srpm-macros-0 100% | 0.0 B/s | 5.6 KiB | 00m00s
[ 15/151] Installing qt6-srpm-macros-0: 100% | 0.0 B/s | 740.0 B | 00m00s
[ 16/151] Installing qt5-srpm-macros-0: 100% | 0.0 B/s | 776.0 B | 00m00s
[ 17/151] Installing pkgconf-m4-0:2.3.0 100% | 0.0 B/s | 14.8 KiB | 00m00s
[ 18/151] Installing gnulib-l10n-0:2024 100% | 215.5 MiB/s | 661.9 KiB | 00m00s
[ 19/151] Installing coreutils-common-0 100% | 429.0 MiB/s | 11.2 MiB | 00m00s
[ 20/151] Installing pcre2-syntax-0:10. 100% | 269.9 MiB/s | 276.4 KiB | 00m00s
[ 21/151] Installing ncurses-base-0:6.5 100% | 86.0 MiB/s | 352.2 KiB | 00m00s
[ 22/151] Installing glibc-minimal-lang 100% | 0.0 B/s | 124.0 B | 00m00s
[ 23/151] Installing ncurses-libs-0:6.5 100% | 232.6 MiB/s | 952.8 KiB | 00m00s
[ 24/151] Installing glibc-0:2.41-11.fc 100% | 214.6 MiB/s | 6.7 MiB | 00m00s
[ 25/151] Installing bash-0:5.2.37-1.fc 100% | 281.7 MiB/s | 8.2 MiB | 00m00s
[ 26/151] Installing glibc-common-0:2.4 100% | 60.0 MiB/s | 1.0 MiB | 00m00s
[ 27/151] Installing glibc-gconv-extra- 100% | 281.1 MiB/s | 7.3 MiB | 00m00s
[ 28/151] Installing zlib-ng-compat-0:2 100% | 135.2 MiB/s | 138.4 KiB | 00m00s
[ 29/151] Installing bzip2-libs-0:1.0.8 100% | 0.0 B/s | 85.7 KiB | 00m00s
[ 30/151] Installing xz-libs-1:5.8.1-2. 100% | 213.8 MiB/s | 218.9 KiB | 00m00s
[ 31/151] Installing libuuid-0:2.40.4-7 100% | 0.0 B/s | 38.4 KiB | 00m00s
[ 32/151] Installing libblkid-0:2.40.4- 100% | 257.4 MiB/s | 263.5 KiB | 00m00s
[ 33/151] Installing popt-0:1.19-8.fc42 100% | 68.1 MiB/s | 139.4 KiB | 00m00s
[ 34/151] Installing readline-0:8.2-13. 100% | 237.9 MiB/s | 487.1 KiB | 00m00s
[ 35/151] Installing gmp-1:6.3.0-4.fc42 100% | 397.2 MiB/s | 813.5 KiB | 00m00s
[ 36/151] Installing libzstd-0:1.5.7-1. 100% | 395.1 MiB/s | 809.1 KiB | 00m00s
[ 37/151] Installing elfutils-libelf-0: 100% | 388.8 MiB/s | 1.2 MiB | 00m00s
[ 38/151] Installing libstdc++-0:15.2.1 100% | 405.2 MiB/s | 2.8 MiB | 00m00s
[ 39/151] Installing libxcrypt-0:4.4.38 100% | 280.4 MiB/s | 287.2 KiB | 00m00s
[ 40/151] Installing libattr-0:2.5.2-5. 100% | 0.0 B/s | 28.1 KiB | 00m00s
[ 41/151] Installing libacl-0:2.3.2-3.f 100% | 0.0 B/s | 39.2 KiB | 00m00s
[ 42/151] Installing dwz-0:0.16-1.fc42. 100% | 21.7 MiB/s | 288.5 KiB | 00m00s
[ 43/151] Installing mpfr-0:4.2.2-1.fc4 100% | 270.3 MiB/s | 830.4 KiB | 00m00s
[ 44/151] Installing gawk-0:5.3.1-1.fc4 100% | 99.7 MiB/s | 1.7 MiB | 00m00s
[ 45/151] Installing unzip-0:6.0-66.fc4 100% | 29.6 MiB/s | 393.8 KiB | 00m00s
[ 46/151] Installing file-libs-0:5.46-3 100% | 741.1 MiB/s | 11.9 MiB | 00m00s
[ 47/151] Installing file-0:5.46-3.fc42 100% | 5.2 MiB/s | 101.7 KiB | 00m00s
[ 48/151] Installing crypto-policies-0: 100% | 41.0 MiB/s | 167.8 KiB | 00m00s
[ 49/151] Installing pcre2-0:10.45-1.fc 100% | 341.4 MiB/s | 699.1 KiB | 00m00s
[ 50/151] Installing grep-0:3.11-10.fc4 100% | 59.0 MiB/s | 1.0 MiB | 00m00s
[ 51/151] Installing xz-1:5.8.1-2.fc42. 100% | 78.3 MiB/s | 1.3 MiB | 00m00s
[ 52/151] Installing libcap-ng-0:0.8.5- 100% | 73.1 MiB/s | 74.8 KiB | 00m00s
[ 53/151] Installing audit-libs-0:4.1.1 100% | 186.3 MiB/s | 381.5 KiB | 00m00s
[ 54/151] Installing libsmartcols-0:2.4 100% | 177.3 MiB/s | 181.5 KiB | 00m00s
[ 55/151] Installing lz4-libs-0:1.10.0- 100% | 154.7 MiB/s | 158.5 KiB | 00m00s
[ 56/151] Installing libsepol-0:3.8-1.f 100% | 403.8 MiB/s | 827.0 KiB | 00m00s
[ 57/151] Installing libselinux-0:3.8-3 100% | 189.8 MiB/s | 194.3 KiB | 00m00s
[ 58/151] Installing findutils-1:4.10.0 100% | 110.2 MiB/s | 1.9 MiB | 00m00s
[ 59/151] Installing sed-0:4.9-4.fc42.x 100% | 56.3 MiB/s | 865.5 KiB | 00m00s
[ 60/151] Installing libmount-0:2.40.4- 100% | 348.9 MiB/s | 357.3 KiB | 00m00s
[ 61/151] Installing libeconf-0:0.7.6-2 100% | 64.7 MiB/s | 66.2 KiB | 00m00s
[ 62/151] Installing pam-libs-0:1.7.0-6 100% | 126.1 MiB/s | 129.1 KiB | 00m00s
[ 63/151] Installing libcap-0:2.73-2.fc 100% | 15.9 MiB/s | 212.1 KiB | 00m00s
[ 64/151] Installing systemd-libs-0:257 100% | 372.7 MiB/s | 2.2 MiB | 00m00s
[ 65/151] Installing lua-libs-0:5.4.8-1 100% | 275.4 MiB/s | 282.0 KiB | 00m00s
[ 66/151] Installing libffi-0:3.4.6-5.f 100% | 0.0 B/s | 83.7 KiB | 00m00s
[ 67/151] Installing libtasn1-0:4.20.0- 100% | 173.9 MiB/s | 178.1 KiB | 00m00s
[ 68/151] Installing p11-kit-0:0.25.8-1 100% | 120.5 MiB/s | 2.3 MiB | 00m00s
[ 69/151] Installing alternatives-0:1.3 100% | 5.2 MiB/s | 63.8 KiB | 00m00s
[ 70/151] Installing libunistring-0:1.1 100% | 345.3 MiB/s | 1.7 MiB | 00m00s
[ 71/151] Installing libidn2-0:2.3.8-1. 100% | 183.2 MiB/s | 562.7 KiB | 00m00s
[ 72/151] Installing libpsl-0:0.21.5-5. 100% | 0.0 B/s | 77.5 KiB | 00m00s
[ 73/151] Installing p11-kit-trust-0:0. 100% | 21.9 MiB/s | 448.3 KiB | 00m00s
[ 74/151] Installing openssl-libs-1:3.2 100% | 391.2 MiB/s | 7.8 MiB | 00m00s
[ 75/151] Installing coreutils-0:9.6-6. 100% | 175.9 MiB/s | 5.5 MiB | 00m00s
[ 76/151] Installing ca-certificates-0: 100% | 2.0 MiB/s | 2.5 MiB | 00m01s
[ 77/151] Installing gzip-0:1.13-3.fc42 100% | 27.8 MiB/s | 398.4 KiB | 00m00s
[ 78/151] Installing rpm-sequoia-0:1.7. 100% | 402.4 MiB/s | 2.4 MiB | 00m00s
[ 79/151] Installing libevent-0:2.1.12- 100% | 295.2 MiB/s | 906.9 KiB | 00m00s
[ 80/151] Installing util-linux-core-0: 100% | 79.2 MiB/s | 1.4 MiB | 00m00s
[ 81/151] Installing systemd-standalone 100% | 20.9 MiB/s | 277.8 KiB | 00m00s
[ 82/151] Installing tar-2:1.35-5.fc42. 100% | 148.1 MiB/s | 3.0 MiB | 00m00s
[ 83/151] Installing libsemanage-0:3.8. 100% | 149.5 MiB/s | 306.2 KiB | 00m00s
[ 84/151] Installing shadow-utils-2:4.1 100% | 144.4 MiB/s | 4.0 MiB | 00m00s
[ 85/151] Installing zstd-0:1.5.7-1.fc4 100% | 106.9 MiB/s | 1.7 MiB | 00m00s
[ 86/151] Installing zip-0:3.0-43.fc42. 100% | 49.0 MiB/s | 702.4 KiB | 00m00s
[ 87/151] Installing libfdisk-0:2.40.4- 100% | 364.7 MiB/s | 373.4 KiB | 00m00s
[ 88/151] Installing libxml2-0:2.12.10- 100% | 106.0 MiB/s | 1.7 MiB | 00m00s
[ 89/151] Installing libarchive-0:3.8.1 100% | 311.6 MiB/s | 957.1 KiB | 00m00s
[ 90/151] Installing bzip2-0:1.0.8-20.f 100% | 8.5 MiB/s | 103.8 KiB | 00m00s
[ 91/151] Installing add-determinism-0: 100% | 137.0 MiB/s | 2.5 MiB | 00m00s
[ 92/151] Installing build-reproducibil 100% | 0.0 B/s | 1.0 KiB | 00m00s
[ 93/151] Installing sqlite-libs-0:3.47 100% | 378.1 MiB/s | 1.5 MiB | 00m00s
[ 94/151] Installing rpm-libs-0:4.20.1- 100% | 353.2 MiB/s | 723.4 KiB | 00m00s
[ 95/151] Installing ed-0:1.21-2.fc42.x 100% | 12.1 MiB/s | 148.8 KiB | 00m00s
[ 96/151] Installing patch-0:2.8-1.fc42 100% | 18.3 MiB/s | 224.3 KiB | 00m00s
[ 97/151] Installing filesystem-srpm-ma 100% | 0.0 B/s | 38.9 KiB | 00m00s
[ 98/151] Installing elfutils-default-y 100% | 408.6 KiB/s | 2.0 KiB | 00m00s
[ 99/151] Installing elfutils-libs-0:0. 100% | 223.1 MiB/s | 685.2 KiB | 00m00s
[100/151] Installing cpio-0:2.15-4.fc42 100% | 68.7 MiB/s | 1.1 MiB | 00m00s
[101/151] Installing diffutils-0:3.12-1 100% | 91.8 MiB/s | 1.6 MiB | 00m00s
[102/151] Installing json-c-0:0.18-2.fc 100% | 85.9 MiB/s | 88.0 KiB | 00m00s
[103/151] Installing libgomp-0:15.2.1-1 100% | 264.9 MiB/s | 542.5 KiB | 00m00s
[104/151] Installing rpm-build-libs-0:4 100% | 202.5 MiB/s | 207.4 KiB | 00m00s
[105/151] Installing jansson-0:2.14-2.f 100% | 92.2 MiB/s | 94.4 KiB | 00m00s
[106/151] Installing libpkgconf-0:2.3.0 100% | 0.0 B/s | 79.2 KiB | 00m00s
[107/151] Installing pkgconf-0:2.3.0-2. 100% | 7.4 MiB/s | 91.0 KiB | 00m00s
[108/151] Installing pkgconf-pkg-config 100% | 147.8 KiB/s | 1.8 KiB | 00m00s
[109/151] Installing xxhash-libs-0:0.8. 100% | 89.4 MiB/s | 91.6 KiB | 00m00s
[110/151] Installing libbrotli-0:1.1.0- 100% | 274.6 MiB/s | 843.6 KiB | 00m00s
[111/151] Installing libnghttp2-0:1.64. 100% | 167.5 MiB/s | 171.5 KiB | 00m00s
[112/151] Installing keyutils-libs-0:1. 100% | 58.3 MiB/s | 59.7 KiB | 00m00s
[113/151] Installing libcom_err-0:1.47. 100% | 0.0 B/s | 68.2 KiB | 00m00s
[114/151] Installing libverto-0:0.3.2-1 100% | 26.6 MiB/s | 27.2 KiB | 00m00s
[115/151] Installing krb5-libs-0:1.21.3 100% | 327.4 MiB/s | 2.3 MiB | 00m00s
[116/151] Installing libssh-0:0.11.3-1. 100% | 277.9 MiB/s | 569.2 KiB | 00m00s
[117/151] Installing libtool-ltdl-0:2.5 100% | 69.6 MiB/s | 71.2 KiB | 00m00s
[118/151] Installing gdbm-libs-1:1.23-9 100% | 128.5 MiB/s | 131.6 KiB | 00m00s
[119/151] Installing cyrus-sasl-lib-0:2 100% | 128.0 MiB/s | 2.3 MiB | 00m00s
[120/151] Installing openldap-0:2.6.10- 100% | 214.7 MiB/s | 659.6 KiB | 00m00s
[121/151] Installing libcurl-0:8.11.1-6 100% | 271.9 MiB/s | 835.2 KiB | 00m00s
[122/151] Installing elfutils-debuginfo 100% | 6.5 MiB/s | 86.2 KiB | 00m00s
[123/151] Installing elfutils-0:0.193-2 100% | 153.8 MiB/s | 2.9 MiB | 00m00s
[124/151] Installing binutils-0:2.44-6. 100% | 335.6 MiB/s | 25.8 MiB | 00m00s
[125/151] Installing gdb-minimal-0:16.3 100% | 301.1 MiB/s | 13.2 MiB | 00m00s
[126/151] Installing debugedit-0:5.1-7. 100% | 14.7 MiB/s | 195.4 KiB | 00m00s
[127/151] Installing curl-0:8.11.1-6.fc 100% | 21.1 MiB/s | 453.1 KiB | 00m00s
[128/151] Installing rpm-0:4.20.1-1.fc4 100% | 99.9 MiB/s | 2.5 MiB | 00m00s
[129/151] Installing lua-srpm-macros-0: 100% | 0.0 B/s | 1.9 KiB | 00m00s
[130/151] Installing tree-sitter-srpm-m 100% | 0.0 B/s | 7.4 KiB | 00m00s
[131/151] Installing zig-srpm-macros-0: 100% | 0.0 B/s | 1.7 KiB | 00m00s
[132/151] Installing efi-srpm-macros-0: 100% | 0.0 B/s | 41.1 KiB | 00m00s
[133/151] Installing perl-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s
[134/151] Installing package-notes-srpm 100% | 0.0 B/s | 2.0 KiB | 00m00s
[135/151] Installing openblas-srpm-macr 100% | 0.0 B/s | 392.0 B | 00m00s
[136/151] Installing ocaml-srpm-macros- 100% | 0.0 B/s | 2.2 KiB | 00m00s
[137/151] Installing kernel-srpm-macros 100% | 0.0 B/s | 2.3 KiB | 00m00s
[138/151] Installing gnat-srpm-macros-0 100% | 0.0 B/s | 1.3 KiB | 00m00s
[139/151] Installing ghc-srpm-macros-0: 100% | 0.0 B/s | 1.0 KiB | 00m00s
[140/151] Installing fpc-srpm-macros-0: 100% | 0.0 B/s | 420.0 B | 00m00s
[141/151] Installing ansible-srpm-macro 100% | 0.0 B/s | 36.2 KiB | 00m00s
[142/151] Installing forge-srpm-macros- 100% | 0.0 B/s | 40.3 KiB | 00m00s
[143/151] Installing fonts-srpm-macros- 100% | 0.0 B/s | 57.0 KiB | 00m00s
[144/151] Installing go-srpm-macros-0:3 100% | 0.0 B/s | 63.0 KiB | 00m00s
[145/151] Installing python-srpm-macros 100% | 50.9 MiB/s | 52.2 KiB | 00m00s
[146/151] Installing redhat-rpm-config- 100% | 93.9 MiB/s | 192.2 KiB | 00m00s
[147/151] Installing rpm-build-0:4.20.1 100% | 12.4 MiB/s | 177.4 KiB | 00m00s
[148/151] Installing pyproject-srpm-mac 100% | 2.4 MiB/s | 2.5 KiB | 00m00s
[149/151] Installing util-linux-0:2.40. 100% | 104.9 MiB/s | 3.5 MiB | 00m00s
[150/151] Installing which-0:2.23-2.fc4 100% | 6.4 MiB/s | 85.7 KiB | 00m00s
[151/151] Installing info-0:7.2-3.fc42. 100% | 228.1 KiB/s | 358.3 KiB | 00m02s
Complete!
Finish: installing minimal buildroot with dnf5
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
INFO: add-determinism-0.6.0-1.fc42.x86_64 alternatives-1.33-1.fc42.x86_64 ansible-srpm-macros-1-17.1.fc42.noarch audit-libs-4.1.1-1.fc42.x86_64 basesystem-11-22.fc42.noarch bash-5.2.37-1.fc42.x86_64 binutils-2.44-6.fc42.x86_64 build-reproducibility-srpm-macros-0.6.0-1.fc42.noarch bzip2-1.0.8-20.fc42.x86_64 bzip2-libs-1.0.8-20.fc42.x86_64 ca-certificates-2025.2.80_v9.0.304-1.0.fc42.noarch coreutils-9.6-6.fc42.x86_64 coreutils-common-9.6-6.fc42.x86_64 cpio-2.15-4.fc42.x86_64 crypto-policies-20250707-1.gitad370a8.fc42.noarch curl-8.11.1-6.fc42.x86_64 cyrus-sasl-lib-2.1.28-30.fc42.x86_64 debugedit-5.1-7.fc42.x86_64 diffutils-3.12-1.fc42.x86_64 dwz-0.16-1.fc42.x86_64 ed-1.21-2.fc42.x86_64 efi-srpm-macros-6-3.fc42.noarch elfutils-0.193-2.fc42.x86_64 elfutils-debuginfod-client-0.193-2.fc42.x86_64 elfutils-default-yama-scope-0.193-2.fc42.noarch elfutils-libelf-0.193-2.fc42.x86_64 elfutils-libs-0.193-2.fc42.x86_64 fedora-gpg-keys-42-1.noarch fedora-release-42-30.noarch fedora-release-common-42-30.noarch fedora-release-identity-basic-42-30.noarch fedora-repos-42-1.noarch file-5.46-3.fc42.x86_64 file-libs-5.46-3.fc42.x86_64 filesystem-3.18-47.fc42.x86_64 filesystem-srpm-macros-3.18-47.fc42.noarch findutils-4.10.0-5.fc42.x86_64 fonts-srpm-macros-2.0.5-22.fc42.noarch forge-srpm-macros-0.4.0-2.fc42.noarch fpc-srpm-macros-1.3-14.fc42.noarch gawk-5.3.1-1.fc42.x86_64 gdb-minimal-16.3-1.fc42.x86_64 gdbm-libs-1.23-9.fc42.x86_64 ghc-srpm-macros-1.9.2-2.fc42.noarch glibc-2.41-11.fc42.x86_64 glibc-common-2.41-11.fc42.x86_64 glibc-gconv-extra-2.41-11.fc42.x86_64 glibc-minimal-langpack-2.41-11.fc42.x86_64 gmp-6.3.0-4.fc42.x86_64 gnat-srpm-macros-6-7.fc42.noarch gnulib-l10n-20241231-1.fc42.noarch go-srpm-macros-3.8.0-1.fc42.noarch gpg-pubkey-105ef944-65ca83d1 grep-3.11-10.fc42.x86_64 gzip-1.13-3.fc42.x86_64 info-7.2-3.fc42.x86_64 jansson-2.14-2.fc42.x86_64 json-c-0.18-2.fc42.x86_64 kernel-srpm-macros-1.0-25.fc42.noarch keyutils-libs-1.6.3-5.fc42.x86_64 krb5-libs-1.21.3-6.fc42.x86_64 libacl-2.3.2-3.fc42.x86_64 libarchive-3.8.1-1.fc42.x86_64 libattr-2.5.2-5.fc42.x86_64 libblkid-2.40.4-7.fc42.x86_64 libbrotli-1.1.0-6.fc42.x86_64 libcap-2.73-2.fc42.x86_64 libcap-ng-0.8.5-4.fc42.x86_64 libcom_err-1.47.2-3.fc42.x86_64 libcurl-8.11.1-6.fc42.x86_64 libeconf-0.7.6-2.fc42.x86_64 libevent-2.1.12-15.fc42.x86_64 libfdisk-2.40.4-7.fc42.x86_64 libffi-3.4.6-5.fc42.x86_64 libgcc-15.2.1-1.fc42.x86_64 libgomp-15.2.1-1.fc42.x86_64 libidn2-2.3.8-1.fc42.x86_64 libmount-2.40.4-7.fc42.x86_64 libnghttp2-1.64.0-3.fc42.x86_64 libpkgconf-2.3.0-2.fc42.x86_64 libpsl-0.21.5-5.fc42.x86_64 libselinux-3.8-3.fc42.x86_64 libsemanage-3.8.1-2.fc42.x86_64 libsepol-3.8-1.fc42.x86_64 libsmartcols-2.40.4-7.fc42.x86_64 libssh-0.11.3-1.fc42.x86_64 libssh-config-0.11.3-1.fc42.noarch libstdc++-15.2.1-1.fc42.x86_64 libtasn1-4.20.0-1.fc42.x86_64 libtool-ltdl-2.5.4-4.fc42.x86_64 libunistring-1.1-9.fc42.x86_64 libuuid-2.40.4-7.fc42.x86_64 libverto-0.3.2-10.fc42.x86_64 libxcrypt-4.4.38-7.fc42.x86_64 libxml2-2.12.10-1.fc42.x86_64 libzstd-1.5.7-1.fc42.x86_64 lua-libs-5.4.8-1.fc42.x86_64 lua-srpm-macros-1-15.fc42.noarch lz4-libs-1.10.0-2.fc42.x86_64 mpfr-4.2.2-1.fc42.x86_64 ncurses-base-6.5-5.20250125.fc42.noarch ncurses-libs-6.5-5.20250125.fc42.x86_64 ocaml-srpm-macros-10-4.fc42.noarch openblas-srpm-macros-2-19.fc42.noarch openldap-2.6.10-1.fc42.x86_64 openssl-libs-3.2.6-2.fc42.x86_64 p11-kit-0.25.8-1.fc42.x86_64 p11-kit-trust-0.25.8-1.fc42.x86_64 package-notes-srpm-macros-0.5-13.fc42.noarch pam-libs-1.7.0-6.fc42.x86_64 patch-2.8-1.fc42.x86_64 pcre2-10.45-1.fc42.x86_64 pcre2-syntax-10.45-1.fc42.noarch perl-srpm-macros-1-57.fc42.noarch pkgconf-2.3.0-2.fc42.x86_64 pkgconf-m4-2.3.0-2.fc42.noarch pkgconf-pkg-config-2.3.0-2.fc42.x86_64 popt-1.19-8.fc42.x86_64 publicsuffix-list-dafsa-20250616-1.fc42.noarch pyproject-srpm-macros-1.18.4-1.fc42.noarch python-srpm-macros-3.13-5.fc42.noarch qt5-srpm-macros-5.15.17-1.fc42.noarch qt6-srpm-macros-6.9.2-1.fc42.noarch readline-8.2-13.fc42.x86_64 redhat-rpm-config-342-4.fc42.noarch rpm-4.20.1-1.fc42.x86_64 rpm-build-4.20.1-1.fc42.x86_64 rpm-build-libs-4.20.1-1.fc42.x86_64 rpm-libs-4.20.1-1.fc42.x86_64 rpm-sequoia-1.7.0-5.fc42.x86_64 rust-srpm-macros-26.4-1.fc42.noarch sed-4.9-4.fc42.x86_64 setup-2.15.0-13.fc42.noarch shadow-utils-4.17.4-1.fc42.x86_64 sqlite-libs-3.47.2-5.fc42.x86_64 systemd-libs-257.10-1.fc42.x86_64 systemd-standalone-sysusers-257.10-1.fc42.x86_64 tar-1.35-5.fc42.x86_64 tree-sitter-srpm-macros-0.1.0-8.fc42.noarch unzip-6.0-66.fc42.x86_64 util-linux-2.40.4-7.fc42.x86_64 util-linux-core-2.40.4-7.fc42.x86_64 which-2.23-2.fc42.x86_64 xxhash-libs-0.8.3-2.fc42.x86_64 xz-5.8.1-2.fc42.x86_64 xz-libs-5.8.1-2.fc42.x86_64 zig-srpm-macros-1-4.fc42.noarch zip-3.0-43.fc42.x86_64 zlib-ng-compat-2.2.5-2.fc42.x86_64 zstd-1.5.7-1.fc42.x86_64
Start: buildsrpm
Start: rpmbuild -bs
Building target platforms: x86_64
Building for target x86_64
warning: Macro expanded in comment on line 111: %{_libdir}/ollama/libggml-cuda.so
setting SOURCE_DATE_EPOCH=1761436800
Wrote: /builddir/build/SRPMS/ollama-0.12.6-1.fc42.src.rpm
RPM build warnings: Macro expanded in comment on line 111: %{_libdir}/ollama/libggml-cuda.so
Finish: rpmbuild -bs
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-42-x86_64-1761505656.433794/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
Finish: buildsrpm
INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-00a19fc2/ollama/ollama.spec) Config(child) 0 minutes 18 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
INFO: Start(/var/lib/copr-rpmbuild/results/ollama-0.12.6-1.fc42.src.rpm) Config(fedora-42-x86_64)
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1761505656.433794/root.
INFO: reusing tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1761505656.433794/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-1761505656.433794/root.
INFO: calling preinit hooks
INFO: enabled root cache
Start: unpacking root cache
Finish: unpacking root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-4.20.1-1.fc42.x86_64 rpm-sequoia-1.7.0-5.fc42.x86_64 dnf5-5.2.16.0-1.fc42.x86_64 dnf5-plugins-5.2.16.0-1.fc42.x86_64
Finish: chroot init
Start: build phase for ollama-0.12.6-1.fc42.src.rpm
Start: build setup for ollama-0.12.6-1.fc42.src.rpm
Building target platforms: x86_64
Building for target x86_64
warning: Macro expanded in comment on line 111: %{_libdir}/ollama/libggml-cuda.so
setting SOURCE_DATE_EPOCH=1761436800
Wrote: /builddir/build/SRPMS/ollama-0.12.6-1.fc42.src.rpm
RPM build warnings: Macro expanded in comment on line 111: %{_libdir}/ollama/libggml-cuda.so
Updating and loading repositories:
 Copr repository 100% | 116.7 KiB/s | 1.5 KiB | 00m00s
 fedora 100% | 115.4 KiB/s | 32.4 KiB | 00m00s
 updates 100% | 161.1 KiB/s | 28.7 KiB | 00m00s
Repositories loaded.
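Aside on the warning that rpmbuild prints twice above ("Macro expanded in comment on line 111: %{_libdir}/ollama/libggml-cuda.so"): rpmbuild expands macros even inside spec-file comments, so a commented-out `%files` entry containing `%{_libdir}` still triggers expansion. The conventional fix is to double the percent sign; the surrounding lines below are an illustrative sketch, not taken from this build's spec:

```
# Before: rpmbuild still expands %{_libdir} inside this comment and warns
# %{_libdir}/ollama/libggml-cuda.so

# After: %% escapes the macro, so the comment is inert and the warning goes away
# %%{_libdir}/ollama/libggml-cuda.so
```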
Package                    Arch    Version                  Repository  Size
Installing:
 ccache                    x86_64  4.10.2-2.fc42            fedora      1.6 MiB
 cmake                     x86_64  3.31.6-2.fc42            fedora      34.2 MiB
 gcc-c++                   x86_64  15.2.1-1.fc42            updates     41.3 MiB
 git                       x86_64  2.51.0-2.fc42            updates     56.4 KiB
 glslang                   x86_64  15.3.0-1.fc42            updates     3.4 MiB
 glslc                     x86_64  2025.2-1.fc42            updates     3.3 MiB
 golang                    x86_64  1.24.9-1.fc42            updates     8.9 MiB
 patchelf                  x86_64  0.18.0-8.fc42            fedora      287.6 KiB
 systemd                   x86_64  257.10-1.fc42            updates     12.1 MiB
 vulkan-headers            noarch  1.4.313.0-1.fc42         updates     30.9 MiB
 vulkan-loader-devel       x86_64  1.4.313.0-1.fc42         updates     8.0 KiB
 vulkan-tools              x86_64  1.4.313.0-1.fc42         updates     1.5 MiB
 vulkan-validation-layers  x86_64  1.4.313.0-1.fc42         updates     18.5 MiB
Installing dependencies:
 annobin-docs              noarch  12.94-1.fc42             updates     98.9 KiB
 annobin-plugin-gcc        x86_64  12.94-1.fc42             updates     993.5 KiB
 authselect                x86_64  1.5.1-1.fc42             fedora      153.9 KiB
 authselect-libs           x86_64  1.5.1-1.fc42             fedora      825.0 KiB
 cmake-data                noarch  3.31.6-2.fc42            fedora      8.5 MiB
 cmake-filesystem          x86_64  3.31.6-2.fc42            fedora      0.0 B
 cmake-rpm-macros          noarch  3.31.6-2.fc42            fedora      7.7 KiB
 cpp                       x86_64  15.2.1-1.fc42            updates     37.9 MiB
 cracklib                  x86_64  2.9.11-7.fc42            fedora      242.4 KiB
 dbus                      x86_64  1:1.16.0-3.fc42          fedora      0.0 B
 dbus-broker               x86_64  36-6.fc42                updates     387.1 KiB
 dbus-common               noarch  1:1.16.0-3.fc42          fedora      11.2 KiB
 emacs-filesystem          noarch  1:30.0-4.fc42            fedora      0.0 B
 expat                     x86_64  2.7.2-1.fc42             updates     298.6 KiB
 fmt                       x86_64  11.1.4-1.fc42            fedora      263.9 KiB
 gcc                       x86_64  15.2.1-1.fc42            updates     111.2 MiB
 gcc-plugin-annobin        x86_64  15.2.1-1.fc42            updates     57.1 KiB
 gdbm                      x86_64  1:1.23-9.fc42            fedora      460.3 KiB
 git-core                  x86_64  2.51.0-2.fc42            updates     23.6 MiB
 git-core-doc              noarch  2.51.0-2.fc42            updates     17.7 MiB
 glibc-devel               x86_64  2.41-11.fc42             updates     2.3 MiB
 go-filesystem             x86_64  3.8.0-1.fc42             updates     0.0 B
 golang-bin                x86_64  1.24.9-1.fc42            updates     122.1 MiB
 golang-src                noarch  1.24.9-1.fc42            updates     79.2 MiB
 groff-base                x86_64  1.23.0-8.fc42            fedora      3.9 MiB
 hiredis                   x86_64  1.2.0-6.fc42             fedora      105.9 KiB
 jsoncpp                   x86_64  1.9.6-1.fc42             fedora      261.6 KiB
 kernel-headers            x86_64  6.17.4-200.fc42          updates     6.7 MiB
 less                      x86_64  679-1.fc42               updates     406.1 KiB
 libX11                    x86_64  1.8.12-1.fc42            updates     1.3 MiB
 libX11-common             noarch  1.8.12-1.fc42            updates     1.2 MiB
 libXau                    x86_64  1.0.12-2.fc42            fedora      76.9 KiB
 libb2                     x86_64  0.98.1-13.fc42           fedora      46.1 KiB
 libcbor                   x86_64  0.11.0-3.fc42            fedora      77.8 KiB
 libedit                   x86_64  3.1-56.20251016cvs.fc42  updates     240.2 KiB
 libfido2                  x86_64  1.15.0-3.fc42            fedora      242.1 KiB
 libmpc                    x86_64  1.3.1-7.fc42             fedora      164.5 KiB
 libnsl2                   x86_64  2.0.1-3.fc42             fedora      57.9 KiB
 libpwquality              x86_64  1.4.5-12.fc42            fedora      409.3 KiB
 libseccomp                x86_64  2.5.5-2.fc41             fedora      173.3 KiB
 libstdc++-devel           x86_64  15.2.1-1.fc42            updates     16.1 MiB
 libtirpc                  x86_64  1.3.7-0.fc42             updates     198.9 KiB
 libuv                     x86_64  1:1.51.0-1.fc42          updates     570.2 KiB
 libwayland-client         x86_64  1.24.0-1.fc42            updates     62.0 KiB
 libxcb                    x86_64  1.17.0-5.fc42            fedora      1.1 MiB
 libxcrypt-devel           x86_64  4.4.38-7.fc42            updates     30.8 KiB
 make                      x86_64  1:4.4.1-10.fc42          fedora      1.8 MiB
 mpdecimal                 x86_64  4.0.1-1.fc42             updates     217.2 KiB
 ncurses                   x86_64  6.5-5.20250125.fc42      fedora      608.1 KiB
 openssh                   x86_64  9.9p1-11.fc42            updates     1.4 MiB
 openssh-clients           x86_64  9.9p1-11.fc42            updates     2.7 MiB
 pam                       x86_64  1.7.0-6.fc42             updates     1.6 MiB
 perl-AutoLoader           noarch  5.74-519.fc42            updates     20.5 KiB
 perl-B                    x86_64  1.89-519.fc42            updates     498.0 KiB
 perl-Carp                 noarch  1.54-512.fc42            fedora      46.6 KiB
 perl-Class-Struct         noarch  0.68-519.fc42            updates     25.4 KiB
 perl-Data-Dumper          x86_64  2.189-513.fc42           fedora      115.6 KiB
 perl-Digest               noarch  1.20-512.fc42            fedora      35.3 KiB
 perl-Digest-MD5           x86_64  2.59-6.fc42              fedora      59.7 KiB
 perl-DynaLoader           x86_64  1.56-519.fc42            updates     32.1 KiB
 perl-Encode               x86_64  4:3.21-512.fc42          fedora      4.7 MiB
 perl-Errno                x86_64  1.38-519.fc42            updates     8.3 KiB
 perl-Error                noarch  1:0.17030-1.fc42         fedora      76.7 KiB
 perl-Exporter             noarch  5.78-512.fc42            fedora      54.3 KiB
 perl-Fcntl                x86_64  1.18-519.fc42            updates     48.9 KiB
 perl-File-Basename        noarch  2.86-519.fc42            updates     14.0 KiB
 perl-File-Path            noarch  2.18-512.fc42            fedora      63.5 KiB
 perl-File-Temp            noarch  1:0.231.100-512.fc42     fedora      162.3 KiB
 perl-File-stat            noarch  1.14-519.fc42            updates     12.5 KiB
 perl-FileHandle           noarch  2.05-519.fc42            updates     9.3 KiB
 perl-Getopt-Long          noarch  1:2.58-3.fc42            fedora      144.5 KiB
 perl-Getopt-Std           noarch  1.14-519.fc42            updates     11.2 KiB
 perl-Git                  noarch  2.51.0-2.fc42            updates     64.4 KiB
 perl-HTTP-Tiny            noarch  0.090-2.fc42             fedora      154.4 KiB
 perl-IO                   x86_64  1.55-519.fc42            updates     147.0 KiB
 perl-IO-Socket-IP         noarch  0.43-2.fc42              fedora      100.3 KiB
 perl-IO-Socket-SSL        noarch  2.089-2.fc42             fedora      703.3 KiB
 perl-IPC-Open3            noarch  1.22-519.fc42            updates     22.5 KiB
 perl-MIME-Base32          noarch  1.303-23.fc42            fedora      30.7 KiB
 perl-MIME-Base64          x86_64  3.16-512.fc42            fedora      42.0 KiB
 perl-Net-SSLeay           x86_64  1.94-8.fc42              fedora      1.3 MiB
 perl-POSIX                x86_64  2.20-519.fc42            updates     231.0 KiB
 perl-PathTools            x86_64  3.91-513.fc42            fedora      180.0 KiB
 perl-Pod-Escapes          noarch  1:1.07-512.fc42          fedora      24.9 KiB
 perl-Pod-Perldoc          noarch  3.28.01-513.fc42         fedora      163.7 KiB
 perl-Pod-Simple           noarch  1:3.45-512.fc42          fedora      560.8 KiB
 perl-Pod-Usage            noarch  4:2.05-1.fc42            fedora      86.3 KiB
 perl-Scalar-List-Utils    x86_64  5:1.70-1.fc42            updates     144.9 KiB
 perl-SelectSaver          noarch  1.02-519.fc42            updates     2.2 KiB
 perl-Socket               x86_64  4:2.038-512.fc42         fedora      119.9 KiB
 perl-Storable             x86_64  1:3.32-512.fc42          fedora      232.3 KiB
 perl-Symbol               noarch  1.09-519.fc42            updates     6.8 KiB
 perl-Term-ANSIColor       noarch  5.01-513.fc42            fedora      97.5 KiB
 perl-Term-Cap             noarch  1.18-512.fc42            fedora      29.3 KiB
 perl-TermReadKey          x86_64  2.38-24.fc42             fedora      64.0 KiB
 perl-Text-ParseWords      noarch  3.31-512.fc42            fedora      13.6 KiB
 perl-Text-Tabs+Wrap       noarch  2024.001-512.fc42        fedora      22.6 KiB
 perl-Time-Local           noarch  2:1.350-512.fc42         fedora      68.9 KiB
 perl-URI                  noarch  5.31-2.fc42              fedora      257.0 KiB
 perl-base                 noarch  2.27-519.fc42            updates     12.5 KiB
 perl-constant             noarch  1.33-513.fc42            fedora      26.2 KiB
 perl-if                   noarch  0.61.000-519.fc42        updates     5.8 KiB
 perl-interpreter          x86_64  4:5.40.3-519.fc42        updates     118.4 KiB
 perl-lib                  x86_64  0.65-519.fc42            updates     8.5 KiB
 perl-libnet               noarch  3.15-513.fc42            fedora      289.4 KiB
 perl-libs                 x86_64  4:5.40.3-519.fc42        updates     9.8 MiB
 perl-locale               noarch  1.12-519.fc42            updates     6.5 KiB
 perl-mro                  x86_64  1.29-519.fc42            updates     41.5 KiB
 perl-overload             noarch  1.37-519.fc42            updates     71.5 KiB
 perl-overloading          noarch  0.02-519.fc42            updates     4.8 KiB
 perl-parent               noarch  1:0.244-2.fc42           fedora      10.3 KiB
 perl-podlators            noarch  1:6.0.2-3.fc42           fedora      317.5 KiB
 perl-vars                 noarch  1.05-519.fc42            updates     3.9 KiB
 python-pip-wheel          noarch  24.3.1-5.fc42            updates     1.2 MiB
 python3                   x86_64  3.13.9-1.fc42            updates     28.7 KiB
 python3-libs              x86_64  3.13.9-1.fc42            updates     40.1 MiB
 rhash                     x86_64  1.4.5-2.fc42             fedora      351.0 KiB
 spirv-tools-libs          x86_64  2025.2-2.fc42            updates     5.8 MiB
 systemd-pam               x86_64  257.10-1.fc42            updates     1.1 MiB
 systemd-rpm-macros        noarch  257.10-1.fc42            updates     10.7 KiB
 systemd-shared            x86_64  257.10-1.fc42            updates     4.6 MiB
 tzdata                    noarch  2025b-1.fc42             fedora      1.6 MiB
 vim-filesystem            noarch  2:9.1.1818-1.fc42        updates     40.0 B
 vulkan-loader             x86_64  1.4.313.0-1.fc42         updates     532.4 KiB
Transaction Summary:
 Installing: 137 packages
Total size of inbound packages is 182 MiB. Need to download 182 MiB.
After this operation, 681 MiB extra will be used (install 681 MiB, remove 0 B).
[ 1/137] patchelf-0:0.18.0-8.fc42.x86_ 100% | 7.8 MiB/s | 127.6 KiB | 00m00s [ 2/137] ccache-0:4.10.2-2.fc42.x86_64 100% | 36.8 MiB/s | 679.0 KiB | 00m00s [ 3/137] git-0:2.51.0-2.fc42.x86_64 100% | 10.0 MiB/s | 40.8 KiB | 00m00s [ 4/137] glslang-0:15.3.0-1.fc42.x86_6 100% | 100.0 MiB/s | 1.1 MiB | 00m00s [ 5/137] glslc-0:2025.2-1.fc42.x86_64 100% | 110.7 MiB/s | 1.1 MiB | 00m00s [ 6/137] cmake-0:3.31.6-2.fc42.x86_64 100% | 196.5 MiB/s | 12.2 MiB | 00m00s [ 7/137] golang-0:1.24.9-1.fc42.x86_64 100% | 12.4 MiB/s | 670.9 KiB | 00m00s [ 8/137] systemd-0:257.10-1.fc42.x86_6 100% | 98.3 MiB/s | 4.0 MiB | 00m00s [ 9/137] vulkan-loader-devel-0:1.4.313 100% | 1.1 MiB/s | 12.2 KiB | 00m00s [ 10/137] vulkan-tools-0:1.4.313.0-1.fc 100% | 60.8 MiB/s | 373.4 KiB | 00m00s [ 11/137] gcc-c++-0:15.2.1-1.fc42.x86_6 100% | 128.2 MiB/s | 15.3 MiB | 00m00s [ 12/137] vulkan-headers-0:1.4.313.0-1. 100% | 38.0 MiB/s | 1.4 MiB | 00m00s [ 13/137] hiredis-0:1.2.0-6.fc42.x86_64 100% | 6.2 MiB/s | 50.7 KiB | 00m00s [ 14/137] fmt-0:11.1.4-1.fc42.x86_64 100% | 6.1 MiB/s | 99.8 KiB | 00m00s [ 15/137] cmake-filesystem-0:3.31.6-2.f 100% | 2.1 MiB/s | 17.6 KiB | 00m00s [ 16/137] vulkan-validation-layers-0:1. 100% | 81.4 MiB/s | 4.0 MiB | 00m00s [ 17/137] cmake-data-0:3.31.6-2.fc42.no 100% | 91.5 MiB/s | 2.5 MiB | 00m00s [ 18/137] jsoncpp-0:1.9.6-1.fc42.x86_64 100% | 7.8 MiB/s | 103.5 KiB | 00m00s [ 19/137] make-1:4.4.1-10.fc42.x86_64 100% | 114.6 MiB/s | 587.0 KiB | 00m00s [ 20/137] rhash-0:1.4.5-2.fc42.x86_64 100% | 27.7 MiB/s | 198.7 KiB | 00m00s [ 21/137] libmpc-0:1.3.1-7.fc42.x86_64 100% | 9.9 MiB/s | 70.9 KiB | 00m00s [ 22/137] perl-Getopt-Long-1:2.58-3.fc4 100% | 15.6 MiB/s | 63.7 KiB | 00m00s [ 23/137] perl-PathTools-0:3.91-513.fc4 100% | 17.1 MiB/s | 87.3 KiB | 00m00s [ 24/137] perl-TermReadKey-0:2.38-24.fc 100% | 11.5 MiB/s | 35.4 KiB | 00m00s [ 25/137] git-core-doc-0:2.51.0-2.fc42. 
100% | 70.4 MiB/s | 3.0 MiB | 00m00s [ 26/137] git-core-0:2.51.0-2.fc42.x86_ 100% | 94.4 MiB/s | 5.0 MiB | 00m00s [ 27/137] perl-Git-0:2.51.0-2.fc42.noar 100% | 1.7 MiB/s | 37.9 KiB | 00m00s [ 28/137] golang-src-0:1.24.9-1.fc42.no 100% | 101.7 MiB/s | 13.1 MiB | 00m00s [ 29/137] dbus-1:1.16.0-3.fc42.x86_64 100% | 554.2 KiB/s | 7.8 KiB | 00m00s [ 30/137] libseccomp-0:2.5.5-2.fc41.x86 100% | 4.3 MiB/s | 70.2 KiB | 00m00s [ 31/137] systemd-pam-0:257.10-1.fc42.x 100% | 30.8 MiB/s | 410.5 KiB | 00m00s [ 32/137] systemd-shared-0:257.10-1.fc4 100% | 68.0 MiB/s | 1.8 MiB | 00m00s [ 33/137] vulkan-loader-0:1.4.313.0-1.f 100% | 18.5 MiB/s | 151.5 KiB | 00m00s [ 34/137] golang-bin-0:1.24.9-1.fc42.x8 100% | 114.0 MiB/s | 29.4 MiB | 00m00s [ 35/137] gcc-0:15.2.1-1.fc42.x86_64 100% | 110.2 MiB/s | 39.4 MiB | 00m00s [ 36/137] emacs-filesystem-1:30.0-4.fc4 100% | 204.3 KiB/s | 7.4 KiB | 00m00s [ 37/137] libxcb-0:1.17.0-5.fc42.x86_64 100% | 3.4 MiB/s | 239.0 KiB | 00m00s [ 38/137] perl-Exporter-0:5.78-512.fc42 100% | 10.1 MiB/s | 31.0 KiB | 00m00s [ 39/137] perl-Pod-Usage-4:2.05-1.fc42. 
100% | 13.2 MiB/s | 40.5 KiB | 00m00s [ 40/137] perl-Text-ParseWords-0:3.31-5 100% | 2.3 MiB/s | 16.5 KiB | 00m00s [ 41/137] perl-constant-0:1.33-513.fc42 100% | 2.5 MiB/s | 23.0 KiB | 00m00s [ 42/137] perl-Carp-0:1.54-512.fc42.noa 100% | 4.7 MiB/s | 28.9 KiB | 00m00s [ 43/137] perl-Error-1:0.17030-1.fc42.n 100% | 4.9 MiB/s | 40.4 KiB | 00m00s [ 44/137] perl-Pod-Perldoc-0:3.28.01-51 100% | 20.9 MiB/s | 85.8 KiB | 00m00s [ 45/137] perl-podlators-1:6.0.2-3.fc42 100% | 25.1 MiB/s | 128.6 KiB | 00m00s [ 46/137] groff-base-0:1.23.0-8.fc42.x8 100% | 92.0 MiB/s | 1.1 MiB | 00m00s [ 47/137] libXau-0:1.0.12-2.fc42.x86_64 100% | 1.2 MiB/s | 33.6 KiB | 00m00s [ 48/137] perl-File-Temp-1:0.231.100-51 100% | 11.6 MiB/s | 59.2 KiB | 00m00s [ 49/137] perl-HTTP-Tiny-0:0.090-2.fc42 100% | 13.8 MiB/s | 56.5 KiB | 00m00s [ 50/137] perl-parent-1:0.244-2.fc42.no 100% | 3.7 MiB/s | 15.2 KiB | 00m00s [ 51/137] perl-Pod-Simple-1:3.45-512.fc 100% | 35.6 MiB/s | 219.0 KiB | 00m00s [ 52/137] perl-Term-ANSIColor-0:5.01-51 100% | 9.3 MiB/s | 47.7 KiB | 00m00s [ 53/137] cpp-0:15.2.1-1.fc42.x86_64 100% | 174.7 MiB/s | 12.9 MiB | 00m00s [ 54/137] perl-Term-Cap-0:1.18-512.fc42 100% | 1.2 MiB/s | 22.2 KiB | 00m00s [ 55/137] perl-File-Path-0:2.18-512.fc4 100% | 2.6 MiB/s | 35.2 KiB | 00m00s [ 56/137] perl-MIME-Base64-0:3.16-512.f 100% | 7.3 MiB/s | 29.9 KiB | 00m00s [ 57/137] perl-IO-Socket-SSL-0:2.089-2. 100% | 37.5 MiB/s | 230.2 KiB | 00m00s [ 58/137] perl-Net-SSLeay-0:1.94-8.fc42 100% | 61.2 MiB/s | 376.0 KiB | 00m00s [ 59/137] perl-Socket-4:2.038-512.fc42. 
100% | 13.4 MiB/s | 54.8 KiB | 00m00s [ 60/137] perl-Time-Local-2:1.350-512.f 100% | 11.2 MiB/s | 34.5 KiB | 00m00s [ 61/137] perl-Pod-Escapes-1:1.07-512.f 100% | 6.5 MiB/s | 19.8 KiB | 00m00s [ 62/137] perl-Text-Tabs+Wrap-0:2024.00 100% | 5.3 MiB/s | 21.8 KiB | 00m00s [ 63/137] ncurses-0:6.5-5.20250125.fc42 100% | 82.9 MiB/s | 424.5 KiB | 00m00s [ 64/137] perl-IO-Socket-IP-0:0.43-2.fc 100% | 8.3 MiB/s | 42.4 KiB | 00m00s [ 65/137] perl-URI-0:5.31-2.fc42.noarch 100% | 34.4 MiB/s | 140.7 KiB | 00m00s [ 66/137] perl-MIME-Base32-0:1.303-23.f 100% | 6.7 MiB/s | 20.5 KiB | 00m00s [ 67/137] perl-Data-Dumper-0:2.189-513. 100% | 13.8 MiB/s | 56.7 KiB | 00m00s [ 68/137] perl-libnet-0:3.15-513.fc42.n 100% | 31.3 MiB/s | 128.4 KiB | 00m00s [ 69/137] perl-Digest-MD5-0:2.59-6.fc42 100% | 11.7 MiB/s | 36.0 KiB | 00m00s [ 70/137] perl-Digest-0:1.20-512.fc42.n 100% | 8.1 MiB/s | 24.9 KiB | 00m00s [ 71/137] libX11-0:1.8.12-1.fc42.x86_64 100% | 91.4 MiB/s | 655.4 KiB | 00m00s [ 72/137] libX11-common-0:1.8.12-1.fc42 100% | 24.5 MiB/s | 175.9 KiB | 00m00s [ 73/137] spirv-tools-libs-0:2025.2-2.f 100% | 109.7 MiB/s | 1.5 MiB | 00m00s [ 74/137] libwayland-client-0:1.24.0-1. 100% | 4.7 MiB/s | 33.6 KiB | 00m00s [ 75/137] python3-0:3.13.9-1.fc42.x86_6 100% | 3.8 MiB/s | 30.8 KiB | 00m00s [ 76/137] libb2-0:0.98.1-13.fc42.x86_64 100% | 2.8 MiB/s | 25.4 KiB | 00m00s [ 77/137] dbus-broker-0:36-6.fc42.x86_6 100% | 16.8 MiB/s | 172.4 KiB | 00m00s [ 78/137] dbus-common-1:1.16.0-3.fc42.n 100% | 1.8 MiB/s | 14.5 KiB | 00m00s [ 79/137] expat-0:2.7.2-1.fc42.x86_64 100% | 19.4 MiB/s | 119.0 KiB | 00m00s [ 80/137] mpdecimal-0:4.0.1-1.fc42.x86_ 100% | 7.9 MiB/s | 97.1 KiB | 00m00s [ 81/137] python-pip-wheel-0:24.3.1-5.f 100% | 92.6 MiB/s | 1.2 MiB | 00m00s [ 82/137] python3-libs-0:3.13.9-1.fc42. 
100% | 131.6 MiB/s | 9.2 MiB | 00m00s [ 83/137] perl-interpreter-4:5.40.3-519 100% | 10.0 MiB/s | 72.0 KiB | 00m00s [ 84/137] perl-libs-4:5.40.3-519.fc42.x 100% | 106.0 MiB/s | 2.3 MiB | 00m00s [ 85/137] perl-Errno-0:1.38-519.fc42.x8 100% | 2.9 MiB/s | 14.8 KiB | 00m00s [ 86/137] go-filesystem-0:3.8.0-1.fc42. 100% | 885.2 KiB/s | 8.9 KiB | 00m00s [ 87/137] less-0:679-1.fc42.x86_64 100% | 17.3 MiB/s | 195.3 KiB | 00m00s [ 88/137] tzdata-0:2025b-1.fc42.noarch 100% | 7.4 MiB/s | 714.0 KiB | 00m00s [ 89/137] libfido2-0:1.15.0-3.fc42.x86_ 100% | 24.0 MiB/s | 98.4 KiB | 00m00s [ 90/137] openssh-clients-0:9.9p1-11.fc 100% | 93.6 MiB/s | 767.0 KiB | 00m00s [ 91/137] openssh-0:9.9p1-11.fc42.x86_6 100% | 49.3 MiB/s | 353.6 KiB | 00m00s [ 92/137] libcbor-0:0.11.0-3.fc42.x86_6 100% | 5.4 MiB/s | 33.3 KiB | 00m00s [ 93/137] perl-File-Basename-0:2.86-519 100% | 3.3 MiB/s | 17.0 KiB | 00m00s [ 94/137] perl-IPC-Open3-0:1.22-519.fc4 100% | 4.2 MiB/s | 21.7 KiB | 00m00s [ 95/137] perl-lib-0:0.65-519.fc42.x86_ 100% | 4.8 MiB/s | 14.8 KiB | 00m00s [ 96/137] glibc-devel-0:2.41-11.fc42.x8 100% | 101.4 MiB/s | 623.2 KiB | 00m00s [ 97/137] perl-Encode-4:3.21-512.fc42.x 100% | 116.9 MiB/s | 1.1 MiB | 00m00s [ 98/137] perl-Storable-1:3.32-512.fc42 100% | 12.2 MiB/s | 99.6 KiB | 00m00s [ 99/137] libstdc++-devel-0:15.2.1-1.fc 100% | 136.4 MiB/s | 2.9 MiB | 00m00s [100/137] perl-POSIX-0:2.20-519.fc42.x8 100% | 9.5 MiB/s | 97.4 KiB | 00m00s [101/137] perl-Fcntl-0:1.18-519.fc42.x8 100% | 4.8 MiB/s | 29.7 KiB | 00m00s [102/137] perl-FileHandle-0:2.05-519.fc 100% | 3.0 MiB/s | 15.3 KiB | 00m00s [103/137] perl-Symbol-0:1.09-519.fc42.n 100% | 1.7 MiB/s | 14.0 KiB | 00m00s [104/137] perl-IO-0:1.55-519.fc42.x86_6 100% | 8.9 MiB/s | 81.6 KiB | 00m00s [105/137] perl-base-0:2.27-519.fc42.noa 100% | 2.0 MiB/s | 16.0 KiB | 00m00s [106/137] perl-overload-0:1.37-519.fc42 100% | 6.3 MiB/s | 45.4 KiB | 00m00s [107/137] perl-Scalar-List-Utils-5:1.70 100% | 6.1 MiB/s | 74.6 KiB | 00m00s [108/137] 
perl-DynaLoader-0:1.56-519.fc 100% | 12.6 MiB/s | 25.9 KiB | 00m00s [109/137] perl-vars-0:1.05-519.fc42.noa 100% | 6.3 MiB/s | 12.8 KiB | 00m00s [110/137] perl-if-0:0.61.000-519.fc42.n 100% | 6.8 MiB/s | 13.8 KiB | 00m00s [111/137] perl-AutoLoader-0:5.74-519.fc 100% | 10.3 MiB/s | 21.1 KiB | 00m00s [112/137] perl-Getopt-Std-0:1.14-519.fc 100% | 7.6 MiB/s | 15.5 KiB | 00m00s [113/137] perl-B-0:1.89-519.fc42.x86_64 100% | 34.5 MiB/s | 176.5 KiB | 00m00s [114/137] vim-filesystem-2:9.1.1818-1.f 100% | 3.8 MiB/s | 15.5 KiB | 00m00s [115/137] libuv-1:1.51.0-1.fc42.x86_64 100% | 32.5 MiB/s | 266.3 KiB | 00m00s [116/137] perl-overloading-0:0.02-519.f 100% | 3.1 MiB/s | 12.7 KiB | 00m00s [117/137] perl-mro-0:1.29-519.fc42.x86_ 100% | 3.2 MiB/s | 29.7 KiB | 00m00s [118/137] perl-locale-0:1.12-519.fc42.n 100% | 3.3 MiB/s | 13.5 KiB | 00m00s [119/137] perl-File-stat-0:1.14-519.fc4 100% | 4.1 MiB/s | 16.9 KiB | 00m00s [120/137] perl-Class-Struct-0:0.68-519. 100% | 7.1 MiB/s | 21.9 KiB | 00m00s [121/137] perl-SelectSaver-0:1.02-519.f 100% | 2.8 MiB/s | 11.6 KiB | 00m00s [122/137] libedit-0:3.1-56.20251016cvs. 100% | 20.5 MiB/s | 105.1 KiB | 00m00s [123/137] libxcrypt-devel-0:4.4.38-7.fc 100% | 2.9 MiB/s | 29.4 KiB | 00m00s [124/137] gcc-plugin-annobin-0:15.2.1-1 100% | 3.9 MiB/s | 55.8 KiB | 00m00s [125/137] systemd-rpm-macros-0:257.10-1 100% | 2.7 MiB/s | 33.0 KiB | 00m00s [126/137] kernel-headers-0:6.17.4-200.f 100% | 58.5 MiB/s | 1.7 MiB | 00m00s [127/137] cmake-rpm-macros-0:3.31.6-2.f 100% | 1.3 MiB/s | 16.9 KiB | 00m00s [128/137] authselect-libs-0:1.5.1-1.fc4 100% | 30.4 MiB/s | 217.9 KiB | 00m00s [129/137] pam-0:1.7.0-6.fc42.x86_64 100% | 108.7 MiB/s | 556.6 KiB | 00m00s [130/137] authselect-0:1.5.1-1.fc42.x86 100% | 28.6 MiB/s | 146.5 KiB | 00m00s [131/137] libnsl2-0:2.0.1-3.fc42.x86_64 100% | 4.8 MiB/s | 29.5 KiB | 00m00s [132/137] libpwquality-0:1.4.5-12.fc42. 
100% | 19.3 MiB/s | 118.5 KiB | 00m00s [133/137] cracklib-0:2.9.11-7.fc42.x86_ 100% | 14.9 MiB/s | 91.6 KiB | 00m00s [134/137] gdbm-1:1.23-9.fc42.x86_64 100% | 36.8 MiB/s | 150.8 KiB | 00m00s [135/137] libtirpc-0:1.3.7-0.fc42.x86_6 100% | 23.0 MiB/s | 94.2 KiB | 00m00s [136/137] annobin-plugin-gcc-0:12.94-1. 100% | 137.0 MiB/s | 981.9 KiB | 00m00s [137/137] annobin-docs-0:12.94-1.fc42.n 100% | 6.8 MiB/s | 90.4 KiB | 00m00s
--------------------------------------------------------------------------------
[137/137] Total 100% | 206.3 MiB/s | 181.7 MiB | 00m01s
Running transaction
[ 1/139] Verify package files 100% | 254.0 B/s | 137.0 B | 00m01s [ 2/139] Prepare transaction 100% | 1.3 KiB/s | 137.0 B | 00m00s [ 3/139] Installing expat-0:2.7.2-1.fc 100% | 21.0 MiB/s | 300.7 KiB | 00m00s [ 4/139] Installing cmake-filesystem-0 100% | 7.4 MiB/s | 7.6 KiB | 00m00s [ 5/139] Installing spirv-tools-libs-0 100% | 413.0 MiB/s | 5.8 MiB | 00m00s [ 6/139] Installing libmpc-0:1.3.1-7.f 100% | 162.2 MiB/s | 166.1 KiB | 00m00s [ 7/139] Installing libtirpc-0:1.3.7-0 100% | 196.0 MiB/s | 200.7 KiB | 00m00s [ 8/139] Installing libseccomp-0:2.5.5 100% | 171.1 MiB/s | 175.2 KiB | 00m00s [ 9/139] Installing make-1:4.4.1-10.fc 100% | 105.9 MiB/s | 1.8 MiB | 00m00s [ 10/139] Installing systemd-shared-0:2 100% | 386.9 MiB/s | 4.6 MiB | 00m00s [ 11/139] Installing libnsl2-0:2.0.1-3. 100% | 57.6 MiB/s | 59.0 KiB | 00m00s [ 12/139] Installing cpp-0:15.2.1-1.fc4 100% | 357.9 MiB/s | 37.9 MiB | 00m00s [ 13/139] Installing annobin-docs-0:12.
100% | 0.0 B/s | 100.0 KiB | 00m00s [ 14/139] Installing gdbm-1:1.23-9.fc42 100% | 32.4 MiB/s | 465.2 KiB | 00m00s [ 15/139] Installing cracklib-0:2.9.11- 100% | 16.5 MiB/s | 253.7 KiB | 00m00s [ 16/139] Installing libpwquality-0:1.4 100% | 27.4 MiB/s | 421.6 KiB | 00m00s [ 17/139] Installing authselect-libs-0: 100% | 136.7 MiB/s | 840.0 KiB | 00m00s [ 18/139] Installing kernel-headers-0:6 100% | 237.1 MiB/s | 6.9 MiB | 00m00s [ 19/139] Installing libxcrypt-devel-0: 100% | 16.2 MiB/s | 33.1 KiB | 00m00s [ 20/139] Installing glibc-devel-0:2.41 100% | 179.4 MiB/s | 2.3 MiB | 00m00s [ 21/139] Installing gcc-0:15.2.1-1.fc4 100% | 418.4 MiB/s | 111.3 MiB | 00m00s [ 22/139] Installing libedit-0:3.1-56.2 100% | 236.2 MiB/s | 241.9 KiB | 00m00s [ 23/139] Installing libuv-1:1.51.0-1.f 100% | 279.8 MiB/s | 573.0 KiB | 00m00s [ 24/139] Installing vim-filesystem-2:9 100% | 2.3 MiB/s | 4.7 KiB | 00m00s [ 25/139] Installing libstdc++-devel-0: 100% | 405.5 MiB/s | 16.2 MiB | 00m00s [ 26/139] Installing libcbor-0:0.11.0-3 100% | 77.3 MiB/s | 79.2 KiB | 00m00s [ 27/139] Installing libfido2-0:1.15.0- 100% | 237.9 MiB/s | 243.6 KiB | 00m00s [ 28/139] Installing openssh-0:9.9p1-11 100% | 86.3 MiB/s | 1.4 MiB | 00m00s [ 29/139] Installing openssh-clients-0: 100% | 112.7 MiB/s | 2.7 MiB | 00m00s [ 30/139] Installing less-0:679-1.fc42. 100% | 28.6 MiB/s | 409.4 KiB | 00m00s [ 31/139] Installing git-core-0:2.51.0- 100% | 363.9 MiB/s | 23.7 MiB | 00m00s [ 32/139] Installing git-core-doc-0:2.5 100% | 388.9 MiB/s | 17.9 MiB | 00m00s [ 33/139] Installing go-filesystem-0:3. 100% | 0.0 B/s | 392.0 B | 00m00s [ 34/139] Installing python-pip-wheel-0 100% | 622.2 MiB/s | 1.2 MiB | 00m00s [ 35/139] Installing mpdecimal-0:4.0.1- 100% | 30.5 MiB/s | 218.8 KiB | 00m00s
>>> Running sysusers scriptlet: dbus-common-1:1.16.0-3.fc42.noarch
>>> Finished sysusers scriptlet: dbus-common-1:1.16.0-3.fc42.noarch
>>> Scriptlet output:
>>> Creating group 'dbus' with GID 81.
>>> Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
>>>
[ 36/139] Installing dbus-common-1:1.16 100% | 1.3 MiB/s | 13.6 KiB | 00m00s [ 37/139] Installing dbus-broker-0:36-6 100% | 20.0 MiB/s | 389.6 KiB | 00m00s [ 38/139] Installing dbus-1:1.16.0-3.fc 100% | 0.0 B/s | 124.0 B | 00m00s [ 39/139] Installing systemd-pam-0:257. 100% | 68.8 MiB/s | 1.1 MiB | 00m00s
>>> Running sysusers scriptlet: systemd-0:257.10-1.fc42.x86_64
>>> Finished sysusers scriptlet: systemd-0:257.10-1.fc42.x86_64
>>> Scriptlet output:
>>> Creating group 'systemd-journal' with GID 190.
>>>
>>> Running sysusers scriptlet: systemd-0:257.10-1.fc42.x86_64
>>> Finished sysusers scriptlet: systemd-0:257.10-1.fc42.x86_64
>>> Scriptlet output:
>>> Creating group 'systemd-oom' with GID 999.
>>> Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 999 and
>>>
[ 40/139] Installing systemd-0:257.10-1 100% | 98.1 MiB/s | 12.3 MiB | 00m00s [ 41/139] Installing tzdata-0:2025b-1.f 100% | 67.6 MiB/s | 1.9 MiB | 00m00s [ 42/139] Installing libb2-0:0.98.1-13. 100% | 9.2 MiB/s | 47.2 KiB | 00m00s [ 43/139] Installing python3-libs-0:3.1 100% | 355.1 MiB/s | 40.5 MiB | 00m00s [ 44/139] Installing python3-0:3.13.9-1 100% | 2.3 MiB/s | 30.5 KiB | 00m00s [ 45/139] Installing cmake-rpm-macros-0 100% | 0.0 B/s | 8.3 KiB | 00m00s [ 46/139] Installing vulkan-headers-0:1 100% | 772.6 MiB/s | 30.9 MiB | 00m00s [ 47/139] Installing libwayland-client- 100% | 61.7 MiB/s | 63.2 KiB | 00m00s [ 48/139] Installing libX11-common-0:1. 100% | 169.7 MiB/s | 1.2 MiB | 00m00s [ 49/139] Installing ncurses-0:6.5-5.20 100% | 28.6 MiB/s | 614.7 KiB | 00m00s [ 50/139] Installing groff-base-0:1.23. 100% | 125.6 MiB/s | 3.9 MiB | 00m00s [ 51/139] Installing perl-Digest-0:1.20 100% | 36.2 MiB/s | 37.1 KiB | 00m00s [ 52/139] Installing perl-Digest-MD5-0: 100% | 60.1 MiB/s | 61.6 KiB | 00m00s [ 53/139] Installing perl-B-0:1.89-519.
100% | 244.8 MiB/s | 501.3 KiB | 00m00s [ 54/139] Installing perl-FileHandle-0: 100% | 0.0 B/s | 9.8 KiB | 00m00s [ 55/139] Installing perl-MIME-Base32-0 100% | 31.4 MiB/s | 32.2 KiB | 00m00s [ 56/139] Installing perl-Data-Dumper-0 100% | 114.7 MiB/s | 117.5 KiB | 00m00s [ 57/139] Installing perl-libnet-0:3.15 100% | 143.9 MiB/s | 294.7 KiB | 00m00s [ 58/139] Installing perl-AutoLoader-0: 100% | 0.0 B/s | 20.9 KiB | 00m00s [ 59/139] Installing perl-IO-Socket-IP- 100% | 99.8 MiB/s | 102.2 KiB | 00m00s [ 60/139] Installing perl-URI-0:5.31-2. 100% | 131.7 MiB/s | 269.6 KiB | 00m00s [ 61/139] Installing perl-Time-Local-2: 100% | 0.0 B/s | 70.6 KiB | 00m00s [ 62/139] Installing perl-Text-Tabs+Wra 100% | 0.0 B/s | 23.9 KiB | 00m00s [ 63/139] Installing perl-File-Path-0:2 100% | 0.0 B/s | 64.5 KiB | 00m00s [ 64/139] Installing perl-Pod-Escapes-1 100% | 0.0 B/s | 25.9 KiB | 00m00s [ 65/139] Installing perl-if-0:0.61.000 100% | 0.0 B/s | 6.2 KiB | 00m00s [ 66/139] Installing perl-Net-SSLeay-0: 100% | 271.7 MiB/s | 1.4 MiB | 00m00s [ 67/139] Installing perl-locale-0:1.12 100% | 0.0 B/s | 6.9 KiB | 00m00s [ 68/139] Installing perl-IO-Socket-SSL 100% | 345.4 MiB/s | 707.4 KiB | 00m00s [ 69/139] Installing perl-Term-ANSIColo 100% | 96.9 MiB/s | 99.2 KiB | 00m00s [ 70/139] Installing perl-Term-Cap-0:1. 
100% | 0.0 B/s | 30.6 KiB | 00m00s [ 71/139] Installing perl-Pod-Simple-1: 100% | 278.5 MiB/s | 570.4 KiB | 00m00s [ 72/139] Installing perl-POSIX-0:2.20- 100% | 226.9 MiB/s | 232.3 KiB | 00m00s [ 73/139] Installing perl-File-Temp-1:0 100% | 160.2 MiB/s | 164.1 KiB | 00m00s [ 74/139] Installing perl-IPC-Open3-0:1 100% | 0.0 B/s | 23.3 KiB | 00m00s [ 75/139] Installing perl-HTTP-Tiny-0:0 100% | 152.8 MiB/s | 156.4 KiB | 00m00s [ 76/139] Installing perl-Class-Struct- 100% | 0.0 B/s | 25.9 KiB | 00m00s [ 77/139] Installing perl-Socket-4:2.03 100% | 119.1 MiB/s | 122.0 KiB | 00m00s [ 78/139] Installing perl-Symbol-0:1.09 100% | 0.0 B/s | 7.2 KiB | 00m00s [ 79/139] Installing perl-SelectSaver-0 100% | 0.0 B/s | 2.6 KiB | 00m00s [ 80/139] Installing perl-podlators-1:6 100% | 22.4 MiB/s | 321.4 KiB | 00m00s [ 81/139] Installing perl-Pod-Perldoc-0 100% | 12.7 MiB/s | 169.2 KiB | 00m00s [ 82/139] Installing perl-File-stat-0:1 100% | 0.0 B/s | 13.1 KiB | 00m00s [ 83/139] Installing perl-Text-ParseWor 100% | 0.0 B/s | 14.6 KiB | 00m00s [ 84/139] Installing perl-Fcntl-0:1.18- 100% | 0.0 B/s | 50.0 KiB | 00m00s [ 85/139] Installing perl-base-0:2.27-5 100% | 0.0 B/s | 12.9 KiB | 00m00s [ 86/139] Installing perl-mro-0:1.29-51 100% | 0.0 B/s | 42.6 KiB | 00m00s [ 87/139] Installing perl-overloading-0 100% | 0.0 B/s | 5.5 KiB | 00m00s [ 88/139] Installing perl-Pod-Usage-4:2 100% | 7.2 MiB/s | 87.9 KiB | 00m00s [ 89/139] Installing perl-IO-0:1.55-519 100% | 147.7 MiB/s | 151.3 KiB | 00m00s [ 90/139] Installing perl-constant-0:1. 
100% | 0.0 B/s | 27.4 KiB | 00m00s [ 91/139] Installing perl-parent-1:0.24 100% | 0.0 B/s | 11.0 KiB | 00m00s [ 92/139] Installing perl-MIME-Base64-0 100% | 43.2 MiB/s | 44.3 KiB | 00m00s [ 93/139] Installing perl-Errno-0:1.38- 100% | 0.0 B/s | 8.7 KiB | 00m00s [ 94/139] Installing perl-File-Basename 100% | 0.0 B/s | 14.6 KiB | 00m00s [ 95/139] Installing perl-Scalar-List-U 100% | 145.2 MiB/s | 148.6 KiB | 00m00s [ 96/139] Installing perl-vars-0:1.05-5 100% | 0.0 B/s | 4.3 KiB | 00m00s [ 97/139] Installing perl-Getopt-Std-0: 100% | 0.0 B/s | 11.7 KiB | 00m00s [ 98/139] Installing perl-overload-0:1. 100% | 0.0 B/s | 71.9 KiB | 00m00s [ 99/139] Installing perl-Storable-1:3. 100% | 228.4 MiB/s | 233.9 KiB | 00m00s [100/139] Installing perl-Getopt-Long-1 100% | 143.8 MiB/s | 147.2 KiB | 00m00s [101/139] Installing perl-Exporter-0:5. 100% | 0.0 B/s | 55.6 KiB | 00m00s [102/139] Installing perl-Carp-0:1.54-5 100% | 0.0 B/s | 47.7 KiB | 00m00s [103/139] Installing perl-PathTools-0:3 100% | 180.2 MiB/s | 184.5 KiB | 00m00s [104/139] Installing perl-DynaLoader-0: 100% | 0.0 B/s | 32.5 KiB | 00m00s [105/139] Installing perl-Encode-4:3.21 100% | 187.8 MiB/s | 4.7 MiB | 00m00s [106/139] Installing perl-libs-4:5.40.3 100% | 290.9 MiB/s | 9.9 MiB | 00m00s [107/139] Installing perl-interpreter-4 100% | 9.0 MiB/s | 120.1 KiB | 00m00s [108/139] Installing perl-TermReadKey-0 100% | 64.6 MiB/s | 66.2 KiB | 00m00s [109/139] Installing perl-Error-1:0.170 100% | 78.1 MiB/s | 80.0 KiB | 00m00s [110/139] Installing perl-lib-0:0.65-51 100% | 0.0 B/s | 8.9 KiB | 00m00s [111/139] Installing perl-Git-0:2.51.0- 100% | 0.0 B/s | 65.4 KiB | 00m00s [112/139] Installing git-0:2.51.0-2.fc4 100% | 0.0 B/s | 57.7 KiB | 00m00s [113/139] Installing libXau-0:1.0.12-2. 100% | 76.6 MiB/s | 78.5 KiB | 00m00s [114/139] Installing libxcb-0:1.17.0-5. 100% | 270.1 MiB/s | 1.1 MiB | 00m00s [115/139] Installing libX11-0:1.8.12-1. 
100% | 320.4 MiB/s | 1.3 MiB | 00m00s [116/139] Installing emacs-filesystem-1 100% | 0.0 B/s | 544.0 B | 00m00s [117/139] Installing vulkan-loader-0:1. 100% | 40.2 MiB/s | 535.0 KiB | 00m00s [118/139] Installing golang-src-0:1.24. 100% | 314.3 MiB/s | 80.2 MiB | 00m00s [119/139] Installing golang-bin-0:1.24. 100% | 450.4 MiB/s | 122.1 MiB | 00m00s [120/139] Installing golang-0:1.24.9-1. 100% | 596.7 MiB/s | 9.0 MiB | 00m00s [121/139] Installing rhash-0:1.4.5-2.fc 100% | 23.2 MiB/s | 356.4 KiB | 00m00s [122/139] Installing jsoncpp-0:1.9.6-1. 100% | 32.1 MiB/s | 263.1 KiB | 00m00s [123/139] Installing cmake-data-0:3.31. 100% | 129.5 MiB/s | 9.1 MiB | 00m00s [124/139] Installing cmake-0:3.31.6-2.f 100% | 356.5 MiB/s | 34.2 MiB | 00m00s [125/139] Installing hiredis-0:1.2.0-6. 100% | 105.1 MiB/s | 107.6 KiB | 00m00s [126/139] Installing fmt-0:11.1.4-1.fc4 100% | 28.8 MiB/s | 265.4 KiB | 00m00s [127/139] Installing ccache-0:4.10.2-2. 100% | 57.4 MiB/s | 1.5 MiB | 00m00s [128/139] Installing vulkan-loader-deve 100% | 0.0 B/s | 9.1 KiB | 00m00s [129/139] Installing vulkan-tools-0:1.4 100% | 86.7 MiB/s | 1.5 MiB | 00m00s [130/139] Installing gcc-c++-0:15.2.1-1 100% | 341.8 MiB/s | 41.4 MiB | 00m00s [131/139] Installing gcc-plugin-annobin 100% | 3.4 MiB/s | 58.6 KiB | 00m00s [132/139] Installing annobin-plugin-gcc 100% | 74.8 MiB/s | 995.1 KiB | 00m00s [133/139] Installing authselect-0:1.5.1 100% | 10.3 MiB/s | 158.2 KiB | 00m00s [134/139] Installing pam-0:1.7.0-6.fc42 100% | 73.9 MiB/s | 1.7 MiB | 00m00s [135/139] Installing glslang-0:15.3.0-1 100% | 169.1 MiB/s | 3.4 MiB | 00m00s [136/139] Installing glslc-0:2025.2-1.f 100% | 167.4 MiB/s | 3.3 MiB | 00m00s [137/139] Installing vulkan-validation- 100% | 473.5 MiB/s | 18.5 MiB | 00m00s [138/139] Installing systemd-rpm-macros 100% | 0.0 B/s | 11.3 KiB | 00m00s [139/139] Installing patchelf-0:0.18.0- 100% | 1.4 MiB/s | 289.4 KiB | 00m00s Complete! 
Finish: build setup for ollama-0.12.6-1.fc42.src.rpm
Start: rpmbuild ollama-0.12.6-1.fc42.src.rpm
Building target platforms: x86_64
Building for target x86_64
warning: Macro expanded in comment on line 111: %{_libdir}/ollama/libggml-cuda.so
setting SOURCE_DATE_EPOCH=1761436800
Executing(%mkbuilddir): /bin/sh -e /var/tmp/rpm-tmp.FXisT3
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.pwG6oq
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.6-build
+ cd /builddir/build/BUILD/ollama-0.12.6-build
+ rm -rf ollama-0.12.6
+ /usr/lib/rpm/rpmuncompress -x -v /builddir/build/SOURCES/v0.12.6.zip
TZ=UTC /usr/bin/unzip -u '/builddir/build/SOURCES/v0.12.6.zip'
Archive: /builddir/build/SOURCES/v0.12.6.zip
1813ff85a027d7d4d76761a2bf12c2198dfaa0cf
creating: ollama-0.12.6/ inflating: ollama-0.12.6/.dockerignore inflating: ollama-0.12.6/.gitattributes creating: ollama-0.12.6/.github/ creating: ollama-0.12.6/.github/ISSUE_TEMPLATE/ inflating: ollama-0.12.6/.github/ISSUE_TEMPLATE/10_bug_report.yml inflating: ollama-0.12.6/.github/ISSUE_TEMPLATE/20_feature_request.md inflating: ollama-0.12.6/.github/ISSUE_TEMPLATE/30_model_request.md inflating: ollama-0.12.6/.github/ISSUE_TEMPLATE/config.yml creating: ollama-0.12.6/.github/workflows/ inflating: ollama-0.12.6/.github/workflows/latest.yaml inflating: ollama-0.12.6/.github/workflows/release.yaml inflating: ollama-0.12.6/.github/workflows/test.yaml inflating: ollama-0.12.6/.gitignore inflating: ollama-0.12.6/.golangci.yaml inflating: ollama-0.12.6/CMakeLists.txt inflating: ollama-0.12.6/CMakePresets.json inflating: ollama-0.12.6/CONTRIBUTING.md inflating: ollama-0.12.6/Dockerfile inflating: ollama-0.12.6/LICENSE inflating: ollama-0.12.6/Makefile.sync inflating: ollama-0.12.6/README.md inflating: ollama-0.12.6/SECURITY.md creating: ollama-0.12.6/api/ inflating: ollama-0.12.6/api/client.go inflating: ollama-0.12.6/api/client_test.go creating: ollama-0.12.6/api/examples/ inflating: ollama-0.12.6/api/examples/README.md creating: 
ollama-0.12.6/api/examples/chat/ inflating: ollama-0.12.6/api/examples/chat/main.go creating: ollama-0.12.6/api/examples/generate-streaming/ inflating: ollama-0.12.6/api/examples/generate-streaming/main.go creating: ollama-0.12.6/api/examples/generate/ inflating: ollama-0.12.6/api/examples/generate/main.go creating: ollama-0.12.6/api/examples/multimodal/ inflating: ollama-0.12.6/api/examples/multimodal/main.go creating: ollama-0.12.6/api/examples/pull-progress/ inflating: ollama-0.12.6/api/examples/pull-progress/main.go inflating: ollama-0.12.6/api/types.go inflating: ollama-0.12.6/api/types_test.go inflating: ollama-0.12.6/api/types_typescript_test.go creating: ollama-0.12.6/app/ extracting: ollama-0.12.6/app/.gitignore inflating: ollama-0.12.6/app/README.md creating: ollama-0.12.6/app/assets/ inflating: ollama-0.12.6/app/assets/app.ico inflating: ollama-0.12.6/app/assets/assets.go inflating: ollama-0.12.6/app/assets/setup.bmp inflating: ollama-0.12.6/app/assets/tray.ico inflating: ollama-0.12.6/app/assets/tray_upgrade.ico creating: ollama-0.12.6/app/lifecycle/ inflating: ollama-0.12.6/app/lifecycle/getstarted_nonwindows.go inflating: ollama-0.12.6/app/lifecycle/getstarted_windows.go inflating: ollama-0.12.6/app/lifecycle/lifecycle.go inflating: ollama-0.12.6/app/lifecycle/logging.go inflating: ollama-0.12.6/app/lifecycle/logging_nonwindows.go inflating: ollama-0.12.6/app/lifecycle/logging_test.go inflating: ollama-0.12.6/app/lifecycle/logging_windows.go inflating: ollama-0.12.6/app/lifecycle/paths.go inflating: ollama-0.12.6/app/lifecycle/server.go inflating: ollama-0.12.6/app/lifecycle/server_unix.go inflating: ollama-0.12.6/app/lifecycle/server_windows.go inflating: ollama-0.12.6/app/lifecycle/updater.go inflating: ollama-0.12.6/app/lifecycle/updater_nonwindows.go inflating: ollama-0.12.6/app/lifecycle/updater_windows.go inflating: ollama-0.12.6/app/main.go inflating: ollama-0.12.6/app/ollama.iss inflating: ollama-0.12.6/app/ollama.rc inflating: 
ollama-0.12.6/app/ollama_welcome.ps1 creating: ollama-0.12.6/app/store/ inflating: ollama-0.12.6/app/store/store.go inflating: ollama-0.12.6/app/store/store_darwin.go inflating: ollama-0.12.6/app/store/store_linux.go inflating: ollama-0.12.6/app/store/store_windows.go creating: ollama-0.12.6/app/tray/ creating: ollama-0.12.6/app/tray/commontray/ inflating: ollama-0.12.6/app/tray/commontray/types.go inflating: ollama-0.12.6/app/tray/tray.go inflating: ollama-0.12.6/app/tray/tray_nonwindows.go inflating: ollama-0.12.6/app/tray/tray_windows.go creating: ollama-0.12.6/app/tray/wintray/ inflating: ollama-0.12.6/app/tray/wintray/eventloop.go inflating: ollama-0.12.6/app/tray/wintray/menus.go inflating: ollama-0.12.6/app/tray/wintray/messages.go inflating: ollama-0.12.6/app/tray/wintray/notifyicon.go inflating: ollama-0.12.6/app/tray/wintray/tray.go inflating: ollama-0.12.6/app/tray/wintray/w32api.go inflating: ollama-0.12.6/app/tray/wintray/winclass.go creating: ollama-0.12.6/auth/ inflating: ollama-0.12.6/auth/auth.go creating: ollama-0.12.6/cmd/ inflating: ollama-0.12.6/cmd/cmd.go inflating: ollama-0.12.6/cmd/cmd_test.go inflating: ollama-0.12.6/cmd/interactive.go inflating: ollama-0.12.6/cmd/interactive_test.go creating: ollama-0.12.6/cmd/runner/ inflating: ollama-0.12.6/cmd/runner/main.go inflating: ollama-0.12.6/cmd/start.go inflating: ollama-0.12.6/cmd/start_darwin.go inflating: ollama-0.12.6/cmd/start_default.go inflating: ollama-0.12.6/cmd/start_windows.go inflating: ollama-0.12.6/cmd/warn_thinking_test.go creating: ollama-0.12.6/convert/ inflating: ollama-0.12.6/convert/convert.go inflating: ollama-0.12.6/convert/convert_bert.go inflating: ollama-0.12.6/convert/convert_commandr.go inflating: ollama-0.12.6/convert/convert_gemma.go inflating: ollama-0.12.6/convert/convert_gemma2.go inflating: ollama-0.12.6/convert/convert_gemma2_adapter.go inflating: ollama-0.12.6/convert/convert_gemma3.go inflating: ollama-0.12.6/convert/convert_gemma3n.go inflating: 
ollama-0.12.6/convert/convert_gptoss.go inflating: ollama-0.12.6/convert/convert_llama.go inflating: ollama-0.12.6/convert/convert_llama4.go inflating: ollama-0.12.6/convert/convert_llama_adapter.go inflating: ollama-0.12.6/convert/convert_mistral.go inflating: ollama-0.12.6/convert/convert_mixtral.go inflating: ollama-0.12.6/convert/convert_mllama.go inflating: ollama-0.12.6/convert/convert_phi3.go inflating: ollama-0.12.6/convert/convert_qwen2.go inflating: ollama-0.12.6/convert/convert_qwen25vl.go inflating: ollama-0.12.6/convert/convert_test.go inflating: ollama-0.12.6/convert/reader.go inflating: ollama-0.12.6/convert/reader_safetensors.go inflating: ollama-0.12.6/convert/reader_test.go inflating: ollama-0.12.6/convert/reader_torch.go creating: ollama-0.12.6/convert/sentencepiece/ inflating: ollama-0.12.6/convert/sentencepiece/sentencepiece_model.pb.go inflating: ollama-0.12.6/convert/sentencepiece_model.proto inflating: ollama-0.12.6/convert/tensor.go inflating: ollama-0.12.6/convert/tensor_test.go creating: ollama-0.12.6/convert/testdata/ inflating: ollama-0.12.6/convert/testdata/Meta-Llama-3-8B-Instruct.json inflating: ollama-0.12.6/convert/testdata/Meta-Llama-3.1-8B-Instruct.json inflating: ollama-0.12.6/convert/testdata/Mistral-7B-Instruct-v0.2.json inflating: ollama-0.12.6/convert/testdata/Mixtral-8x7B-Instruct-v0.1.json inflating: ollama-0.12.6/convert/testdata/Phi-3-mini-128k-instruct.json inflating: ollama-0.12.6/convert/testdata/Qwen2.5-0.5B-Instruct.json inflating: ollama-0.12.6/convert/testdata/all-MiniLM-L6-v2.json inflating: ollama-0.12.6/convert/testdata/c4ai-command-r-v01.json inflating: ollama-0.12.6/convert/testdata/gemma-2-2b-it.json inflating: ollama-0.12.6/convert/testdata/gemma-2-9b-it.json inflating: ollama-0.12.6/convert/testdata/gemma-2b-it.json inflating: ollama-0.12.6/convert/tokenizer.go inflating: ollama-0.12.6/convert/tokenizer_spm.go inflating: ollama-0.12.6/convert/tokenizer_test.go creating: ollama-0.12.6/discover/ inflating: 
ollama-0.12.6/discover/cpu_linux.go inflating: ollama-0.12.6/discover/cpu_linux_test.go inflating: ollama-0.12.6/discover/cpu_windows.go inflating: ollama-0.12.6/discover/cpu_windows_test.go inflating: ollama-0.12.6/discover/gpu.go inflating: ollama-0.12.6/discover/gpu_darwin.go inflating: ollama-0.12.6/discover/gpu_info_darwin.h inflating: ollama-0.12.6/discover/gpu_info_darwin.m inflating: ollama-0.12.6/discover/path.go inflating: ollama-0.12.6/discover/runner.go inflating: ollama-0.12.6/discover/runner_test.go inflating: ollama-0.12.6/discover/types.go creating: ollama-0.12.6/docs/ inflating: ollama-0.12.6/docs/README.md inflating: ollama-0.12.6/docs/api.md inflating: ollama-0.12.6/docs/cloud.md inflating: ollama-0.12.6/docs/development.md inflating: ollama-0.12.6/docs/docker.md inflating: ollama-0.12.6/docs/examples.md inflating: ollama-0.12.6/docs/faq.md inflating: ollama-0.12.6/docs/gpu.md creating: ollama-0.12.6/docs/images/ inflating: ollama-0.12.6/docs/images/ollama-keys.png inflating: ollama-0.12.6/docs/images/signup.png inflating: ollama-0.12.6/docs/import.md inflating: ollama-0.12.6/docs/linux.md inflating: ollama-0.12.6/docs/macos.md inflating: ollama-0.12.6/docs/modelfile.md inflating: ollama-0.12.6/docs/openai.md inflating: ollama-0.12.6/docs/template.md inflating: ollama-0.12.6/docs/troubleshooting.md inflating: ollama-0.12.6/docs/windows.md creating: ollama-0.12.6/envconfig/ inflating: ollama-0.12.6/envconfig/config.go inflating: ollama-0.12.6/envconfig/config_test.go creating: ollama-0.12.6/format/ inflating: ollama-0.12.6/format/bytes.go inflating: ollama-0.12.6/format/bytes_test.go inflating: ollama-0.12.6/format/format.go inflating: ollama-0.12.6/format/format_test.go inflating: ollama-0.12.6/format/time.go inflating: ollama-0.12.6/format/time_test.go creating: ollama-0.12.6/fs/ inflating: ollama-0.12.6/fs/config.go creating: ollama-0.12.6/fs/ggml/ inflating: ollama-0.12.6/fs/ggml/ggml.go inflating: ollama-0.12.6/fs/ggml/ggml_test.go inflating: 
ollama-0.12.6/fs/ggml/gguf.go inflating: ollama-0.12.6/fs/ggml/gguf_test.go inflating: ollama-0.12.6/fs/ggml/type.go creating: ollama-0.12.6/fs/gguf/ inflating: ollama-0.12.6/fs/gguf/gguf.go inflating: ollama-0.12.6/fs/gguf/gguf_test.go inflating: ollama-0.12.6/fs/gguf/keyvalue.go inflating: ollama-0.12.6/fs/gguf/keyvalue_test.go inflating: ollama-0.12.6/fs/gguf/lazy.go inflating: ollama-0.12.6/fs/gguf/reader.go inflating: ollama-0.12.6/fs/gguf/tensor.go creating: ollama-0.12.6/fs/util/ creating: ollama-0.12.6/fs/util/bufioutil/ inflating: ollama-0.12.6/fs/util/bufioutil/buffer_seeker.go inflating: ollama-0.12.6/fs/util/bufioutil/buffer_seeker_test.go inflating: ollama-0.12.6/go.mod inflating: ollama-0.12.6/go.sum creating: ollama-0.12.6/harmony/ inflating: ollama-0.12.6/harmony/harmonyparser.go inflating: ollama-0.12.6/harmony/harmonyparser_test.go creating: ollama-0.12.6/integration/ inflating: ollama-0.12.6/integration/README.md inflating: ollama-0.12.6/integration/api_test.go inflating: ollama-0.12.6/integration/basic_test.go inflating: ollama-0.12.6/integration/concurrency_test.go inflating: ollama-0.12.6/integration/context_test.go inflating: ollama-0.12.6/integration/embed_test.go inflating: ollama-0.12.6/integration/library_models_test.go inflating: ollama-0.12.6/integration/llm_image_test.go inflating: ollama-0.12.6/integration/max_queue_test.go inflating: ollama-0.12.6/integration/model_arch_test.go inflating: ollama-0.12.6/integration/model_perf_test.go inflating: ollama-0.12.6/integration/quantization_test.go creating: ollama-0.12.6/integration/testdata/ inflating: ollama-0.12.6/integration/testdata/embed.json inflating: ollama-0.12.6/integration/testdata/shakespeare.txt inflating: ollama-0.12.6/integration/utils_test.go creating: ollama-0.12.6/kvcache/ inflating: ollama-0.12.6/kvcache/cache.go inflating: ollama-0.12.6/kvcache/causal.go inflating: ollama-0.12.6/kvcache/causal_test.go inflating: ollama-0.12.6/kvcache/encoder.go inflating: 
ollama-0.12.6/kvcache/wrapper.go creating: ollama-0.12.6/llama/ extracting: ollama-0.12.6/llama/.gitignore inflating: ollama-0.12.6/llama/README.md inflating: ollama-0.12.6/llama/build-info.cpp inflating: ollama-0.12.6/llama/build-info.cpp.in creating: ollama-0.12.6/llama/llama.cpp/ inflating: ollama-0.12.6/llama/llama.cpp/.rsync-filter inflating: ollama-0.12.6/llama/llama.cpp/LICENSE creating: ollama-0.12.6/llama/llama.cpp/common/ inflating: ollama-0.12.6/llama/llama.cpp/common/base64.hpp inflating: ollama-0.12.6/llama/llama.cpp/common/common.cpp inflating: ollama-0.12.6/llama/llama.cpp/common/common.go inflating: ollama-0.12.6/llama/llama.cpp/common/common.h inflating: ollama-0.12.6/llama/llama.cpp/common/json-schema-to-grammar.cpp inflating: ollama-0.12.6/llama/llama.cpp/common/json-schema-to-grammar.h inflating: ollama-0.12.6/llama/llama.cpp/common/log.cpp inflating: ollama-0.12.6/llama/llama.cpp/common/log.h inflating: ollama-0.12.6/llama/llama.cpp/common/sampling.cpp inflating: ollama-0.12.6/llama/llama.cpp/common/sampling.h creating: ollama-0.12.6/llama/llama.cpp/include/ inflating: ollama-0.12.6/llama/llama.cpp/include/llama-cpp.h inflating: ollama-0.12.6/llama/llama.cpp/include/llama.h creating: ollama-0.12.6/llama/llama.cpp/src/ inflating: ollama-0.12.6/llama/llama.cpp/src/llama-adapter.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-adapter.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-arch.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-arch.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-batch.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-batch.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-chat.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-chat.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-context.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-context.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-cparams.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-cparams.h inflating: 
ollama-0.12.6/llama/llama.cpp/src/llama-grammar.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-grammar.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-graph.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-graph.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-hparams.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-hparams.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-impl.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-impl.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-io.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-io.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-kv-cache-iswa.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-kv-cache-iswa.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-kv-cache.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-kv-cache.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-kv-cells.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-memory-hybrid.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-memory-hybrid.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-memory-recurrent.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-memory-recurrent.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-memory.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-memory.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-mmap.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-mmap.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-model-loader.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-model-loader.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-model-saver.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-model-saver.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-model.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-model.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-quant.cpp extracting: ollama-0.12.6/llama/llama.cpp/src/llama-quant.h inflating: 
ollama-0.12.6/llama/llama.cpp/src/llama-sampling.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-sampling.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama-vocab.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama-vocab.h inflating: ollama-0.12.6/llama/llama.cpp/src/llama.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/llama.go inflating: ollama-0.12.6/llama/llama.cpp/src/unicode-data.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/unicode-data.h inflating: ollama-0.12.6/llama/llama.cpp/src/unicode.cpp inflating: ollama-0.12.6/llama/llama.cpp/src/unicode.h creating: ollama-0.12.6/llama/llama.cpp/tools/ creating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/ inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/clip-impl.h inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/clip.cpp inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/clip.h inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/mtmd-audio.cpp inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/mtmd-audio.h inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/mtmd-helper.cpp inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/mtmd-helper.h inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/mtmd.cpp inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/mtmd.go inflating: ollama-0.12.6/llama/llama.cpp/tools/mtmd/mtmd.h creating: ollama-0.12.6/llama/llama.cpp/vendor/ creating: ollama-0.12.6/llama/llama.cpp/vendor/miniaudio/ inflating: ollama-0.12.6/llama/llama.cpp/vendor/miniaudio/miniaudio.h creating: ollama-0.12.6/llama/llama.cpp/vendor/nlohmann/ inflating: ollama-0.12.6/llama/llama.cpp/vendor/nlohmann/json.hpp inflating: ollama-0.12.6/llama/llama.cpp/vendor/nlohmann/json_fwd.hpp creating: ollama-0.12.6/llama/llama.cpp/vendor/stb/ inflating: ollama-0.12.6/llama/llama.cpp/vendor/stb/stb_image.h inflating: ollama-0.12.6/llama/llama.go inflating: ollama-0.12.6/llama/llama_test.go creating: ollama-0.12.6/llama/patches/ extracting: ollama-0.12.6/llama/patches/.gitignore inflating: 
ollama-0.12.6/llama/patches/0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch inflating: ollama-0.12.6/llama/patches/0002-pretokenizer.patch inflating: ollama-0.12.6/llama/patches/0003-clip-unicode.patch inflating: ollama-0.12.6/llama/patches/0004-solar-pro.patch inflating: ollama-0.12.6/llama/patches/0005-fix-deepseek-deseret-regex.patch inflating: ollama-0.12.6/llama/patches/0006-maintain-ordering-for-rules-for-grammar.patch inflating: ollama-0.12.6/llama/patches/0007-sort-devices-by-score.patch inflating: ollama-0.12.6/llama/patches/0008-add-phony-target-ggml-cpu-for-all-cpu-variants.patch inflating: ollama-0.12.6/llama/patches/0009-remove-amx.patch inflating: ollama-0.12.6/llama/patches/0010-fix-string-arr-kv-loading.patch inflating: ollama-0.12.6/llama/patches/0011-ollama-debug-tensor.patch inflating: ollama-0.12.6/llama/patches/0012-add-ollama-vocab-for-grammar-support.patch inflating: ollama-0.12.6/llama/patches/0013-add-argsort-and-cuda-copy-for-i32.patch inflating: ollama-0.12.6/llama/patches/0014-graph-memory-reporting-on-failure.patch inflating: ollama-0.12.6/llama/patches/0015-ggml-Export-GPU-UUIDs.patch inflating: ollama-0.12.6/llama/patches/0016-add-C-API-for-mtmd_input_text.patch inflating: ollama-0.12.6/llama/patches/0017-no-power-throttling-win32-with-gnuc.patch inflating: ollama-0.12.6/llama/patches/0018-BF16-macos-version-guard.patch inflating: ollama-0.12.6/llama/patches/0019-Enable-CUDA-Graphs-for-gemma3n.patch inflating: ollama-0.12.6/llama/patches/0020-Disable-ggml-blas-on-macos-v13-and-older.patch inflating: ollama-0.12.6/llama/patches/0021-fix-mtmd-audio.cpp-build-on-windows.patch inflating: ollama-0.12.6/llama/patches/0022-ggml-No-alloc-mode.patch inflating: ollama-0.12.6/llama/patches/0023-decode-disable-output_all.patch inflating: ollama-0.12.6/llama/patches/0024-ggml-Enable-resetting-backend-devices.patch inflating: ollama-0.12.6/llama/patches/0025-harden-uncaught-exception-registration.patch inflating: 
ollama-0.12.6/llama/patches/0026-GPU-discovery-enhancements.patch inflating: ollama-0.12.6/llama/patches/0027-vulkan-get-GPU-ID-ollama-v0.11.5.patch inflating: ollama-0.12.6/llama/patches/0028-vulkan-pci-and-memory.patch inflating: ollama-0.12.6/llama/patches/0029-NVML-fallback-for-unified-memory-GPUs.patch inflating: ollama-0.12.6/llama/patches/0030-CUDA-Changing-the-CUDA-scheduling-strategy-to-spin-1.patch inflating: ollama-0.12.6/llama/sampling_ext.cpp inflating: ollama-0.12.6/llama/sampling_ext.h creating: ollama-0.12.6/llm/ inflating: ollama-0.12.6/llm/llm_darwin.go inflating: ollama-0.12.6/llm/llm_linux.go inflating: ollama-0.12.6/llm/llm_windows.go inflating: ollama-0.12.6/llm/memory.go inflating: ollama-0.12.6/llm/memory_test.go inflating: ollama-0.12.6/llm/server.go inflating: ollama-0.12.6/llm/server_test.go inflating: ollama-0.12.6/llm/status.go creating: ollama-0.12.6/logutil/ inflating: ollama-0.12.6/logutil/logutil.go creating: ollama-0.12.6/macapp/ inflating: ollama-0.12.6/macapp/.eslintrc.json inflating: ollama-0.12.6/macapp/.gitignore inflating: ollama-0.12.6/macapp/README.md creating: ollama-0.12.6/macapp/assets/ inflating: ollama-0.12.6/macapp/assets/icon.icns extracting: ollama-0.12.6/macapp/assets/iconDarkTemplate.png extracting: ollama-0.12.6/macapp/assets/iconDarkTemplate@2x.png extracting: ollama-0.12.6/macapp/assets/iconDarkUpdateTemplate.png extracting: ollama-0.12.6/macapp/assets/iconDarkUpdateTemplate@2x.png extracting: ollama-0.12.6/macapp/assets/iconTemplate.png extracting: ollama-0.12.6/macapp/assets/iconTemplate@2x.png extracting: ollama-0.12.6/macapp/assets/iconUpdateTemplate.png extracting: ollama-0.12.6/macapp/assets/iconUpdateTemplate@2x.png inflating: ollama-0.12.6/macapp/forge.config.ts inflating: ollama-0.12.6/macapp/package-lock.json inflating: ollama-0.12.6/macapp/package.json inflating: ollama-0.12.6/macapp/postcss.config.js creating: ollama-0.12.6/macapp/src/ inflating: ollama-0.12.6/macapp/src/app.css inflating: 
ollama-0.12.6/macapp/src/app.tsx inflating: ollama-0.12.6/macapp/src/declarations.d.ts inflating: ollama-0.12.6/macapp/src/index.html inflating: ollama-0.12.6/macapp/src/index.ts inflating: ollama-0.12.6/macapp/src/install.ts inflating: ollama-0.12.6/macapp/src/ollama.svg extracting: ollama-0.12.6/macapp/src/preload.ts inflating: ollama-0.12.6/macapp/src/renderer.tsx inflating: ollama-0.12.6/macapp/tailwind.config.js inflating: ollama-0.12.6/macapp/tsconfig.json inflating: ollama-0.12.6/macapp/webpack.main.config.ts inflating: ollama-0.12.6/macapp/webpack.plugins.ts inflating: ollama-0.12.6/macapp/webpack.renderer.config.ts inflating: ollama-0.12.6/macapp/webpack.rules.ts inflating: ollama-0.12.6/main.go creating: ollama-0.12.6/middleware/ inflating: ollama-0.12.6/middleware/openai.go inflating: ollama-0.12.6/middleware/openai_test.go creating: ollama-0.12.6/ml/ inflating: ollama-0.12.6/ml/backend.go creating: ollama-0.12.6/ml/backend/ inflating: ollama-0.12.6/ml/backend/backend.go creating: ollama-0.12.6/ml/backend/ggml/ inflating: ollama-0.12.6/ml/backend/ggml/ggml.go creating: ollama-0.12.6/ml/backend/ggml/ggml/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/.rsync-filter inflating: ollama-0.12.6/ml/backend/ggml/ggml/LICENSE creating: ollama-0.12.6/ml/backend/ggml/ggml/cmake/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/cmake/common.cmake creating: ollama-0.12.6/ml/backend/ggml/ggml/include/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-alloc.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-backend.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-blas.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-cann.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-cpp.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-cpu.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-cuda.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-metal.h inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-opencl.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-opt.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-rpc.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-sycl.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-vulkan.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml-zdnn.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ggml.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/gguf.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/include/ollama-debug.h creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/CMakeLists.txt inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-alloc.c inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-backend-impl.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-backend-reg.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-backend.cpp creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-blas/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-blas/CMakeLists.txt inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-blas/blas.go inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-blas/ggml-blas.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-common.h creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/CMakeLists.txt creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/common.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch-fallback.h creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/ 
creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/arm.go inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/cpu-feats.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/quants.c inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/repack.cpp creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/x86.go inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/cpu.go inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/cpu_amd64.go inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/cpu_arm64.go extracting: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/cpu_debug.go inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/hbm.h creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/llamafile.go inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/quants.c inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/quants.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/simd-mappings.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.h creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/acc.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/acc.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/arange.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/arange.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cuh inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/common.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/concat.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/concat.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/conv2d.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/conv2d.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/convert.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/convert.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/cp-async.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/cpy-utils.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/dequantize.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn-common.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn-mma-f16.cuh inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn-vec.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/gla.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/gla.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mean.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mean.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mma.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/norm.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/norm.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cu inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-sgd.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-sgd.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/pad.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/pad.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/pad_reflect_1d.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/pad_reflect_1d.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/reduce_rows.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/roll.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/roll.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/rope.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/rope.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/scale.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/scale.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cu inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/sum.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/sum.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cuh creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-tile-instance-dkq112-dv112.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-tile-instance-dkq128-dv128.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-tile-instance-dkq256-dv256.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-tile-instance-dkq40-dv40.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-tile-instance-dkq576-dv512.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-tile-instance-dkq64-dv64.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-tile-instance-dkq80-dv80.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-tile-instance-dkq96-dv96.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-f16.cu 
inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q4_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q4_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q5_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q5_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q8_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-f16.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q4_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q4_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q5_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q5_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q8_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-f16.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q4_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q4_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q5_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q5_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q8_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-f16.cu 
inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q4_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q4_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q5_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q5_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q8_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-f16.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q4_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q4_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q5_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q5_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q8_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-f16.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q4_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q4_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q5_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q5_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q8_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_1.cu inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_10.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_11.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_12.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_13.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_14.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_15.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_16.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_2.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_3.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_4.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_5.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_6.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_7.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_8.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_9.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq1_s.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_s.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_s.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-mxfp4.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q2_k.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q3_k.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_k.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_1.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_k.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q6_k.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q8_0.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/topk-moe.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/topk-moe.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/unary.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/unary.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cu 
inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cuh inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/vecdotq.cuh creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/vendors/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/vendors/cuda.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/vendors/hip.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/vendors/musa.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cu inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cuh creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-hip/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-hip/CMakeLists.txt inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/CMakeLists.txt inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-common.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-common.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-context.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-context.m inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.m inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.s inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-impl.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-ops.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-ops.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.cpp inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-metal/metal.go inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-opt.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-quants.c inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-quants.h inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-threading.cpp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-threading.h creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/CMakeLists.txt inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp creating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/ inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/CMakeLists.txt inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/acc.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/add.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/add_id.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/argmax.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/argsort.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/clamp.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/concat.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/contig_copy.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/conv2d_dw.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/conv2d_mm.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/conv_transpose_1d.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/copy.comp inflating: 
ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/copy_from_quant.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/copy_to_quant.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/cos.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/count_equal.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_f32.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_funcs.glsl inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_funcs_cm2.glsl inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_head.glsl inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq1_m.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq1_s.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq2_s.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq2_xs.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq2_xxs.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq3_s.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq3_xxs.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq4_nl.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_iq4_xs.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_mxfp4.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q2_k.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q3_k.comp inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q4_0.comp 
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q4_1.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q4_k.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q5_0.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q5_1.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q5_k.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q6_k.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/dequant_q8_0.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/diag_mask_inf.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/div.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/exp.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/flash_attn.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/flash_attn_base.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/flash_attn_cm1.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/flash_attn_cm2.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/flash_attn_split_k_reduce.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/geglu.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/geglu_erf.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/geglu_quick.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/gelu.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/gelu_erf.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/gelu_quick.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/generic_binary_head.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/generic_head.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/generic_unary_head.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/get_rows.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/get_rows_quant.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/glu_head.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/glu_main.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/group_norm.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/hardsigmoid.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/hardswish.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/im2col.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/im2col_3d.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/l2_norm.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/leaky_relu.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_split_k_reduce.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_base.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_iq1_m.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_iq1_s.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_iq2_s.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_iq2_xs.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_iq2_xxs.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_iq3_s.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_iq3_xxs.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_nc.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_p021.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_q2_k.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_q3_k.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_q4_k.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_q5_k.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vec_q6_k.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mat_vecq.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_funcs.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mmq.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/mul_mmq_funcs.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/multi_add.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/norm.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/opt_step_adamw.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/opt_step_sgd.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/pad.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/pool2d.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/quantize_q8_1.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/reglu.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/relu.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/repeat.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/repeat_back.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rms_norm.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rms_norm_back.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rms_norm_partials.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/roll.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rope_head.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rope_multi.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rope_neox.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rope_norm.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rope_vision.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/rte.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/scale.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/sigmoid.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/silu.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/silu_back.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/sin.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/soft_max.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/soft_max_back.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/sqrt.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/square.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/sub.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/sum_rows.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/swiglu.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/swiglu_oai.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/tanh.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/timestep_embedding.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/types.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/upscale.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/utils.glsl
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/wkv6.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/wkv7.comp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml.c
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml.cpp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml.go
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ggml_darwin_arm64.go
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/gguf.cpp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/mem_hip.cpp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/mem_nvml.cpp
  inflating: ollama-0.12.6/ml/backend/ggml/ggml/src/ollama-debug.c
  inflating: ollama-0.12.6/ml/backend/ggml/quantization.go
  inflating: ollama-0.12.6/ml/backend/ggml/threads.go
  inflating: ollama-0.12.6/ml/backend/ggml/threads_debug.go
  inflating: ollama-0.12.6/ml/device.go
   creating: ollama-0.12.6/ml/nn/
  inflating: ollama-0.12.6/ml/nn/attention.go
  inflating: ollama-0.12.6/ml/nn/convolution.go
  inflating: ollama-0.12.6/ml/nn/embedding.go
   creating: ollama-0.12.6/ml/nn/fast/
  inflating: ollama-0.12.6/ml/nn/fast/rope.go
  inflating: ollama-0.12.6/ml/nn/linear.go
  inflating: ollama-0.12.6/ml/nn/normalization.go
   creating: ollama-0.12.6/ml/nn/pooling/
  inflating: ollama-0.12.6/ml/nn/pooling/pooling.go
  inflating: ollama-0.12.6/ml/nn/pooling/pooling_test.go
   creating: ollama-0.12.6/ml/nn/rope/
  inflating: ollama-0.12.6/ml/nn/rope/rope.go
   creating: ollama-0.12.6/model/
  inflating: ollama-0.12.6/model/bytepairencoding.go
  inflating: ollama-0.12.6/model/bytepairencoding_test.go
   creating: ollama-0.12.6/model/imageproc/
  inflating: ollama-0.12.6/model/imageproc/images.go
  inflating: ollama-0.12.6/model/imageproc/images_test.go
   creating: ollama-0.12.6/model/input/
  inflating: ollama-0.12.6/model/input/input.go
  inflating: ollama-0.12.6/model/model.go
  inflating: ollama-0.12.6/model/model_test.go
   creating: ollama-0.12.6/model/models/
   creating: ollama-0.12.6/model/models/bert/
  inflating: ollama-0.12.6/model/models/bert/embed.go
   creating: ollama-0.12.6/model/models/deepseek2/
  inflating: ollama-0.12.6/model/models/deepseek2/model.go
   creating: ollama-0.12.6/model/models/gemma2/
  inflating: ollama-0.12.6/model/models/gemma2/model.go
   creating: ollama-0.12.6/model/models/gemma3/
  inflating: ollama-0.12.6/model/models/gemma3/embed.go
  inflating: ollama-0.12.6/model/models/gemma3/model.go
  inflating: ollama-0.12.6/model/models/gemma3/model_text.go
  inflating: ollama-0.12.6/model/models/gemma3/model_vision.go
  inflating: ollama-0.12.6/model/models/gemma3/process_image.go
   creating: ollama-0.12.6/model/models/gemma3n/
  inflating: ollama-0.12.6/model/models/gemma3n/model.go
  inflating: ollama-0.12.6/model/models/gemma3n/model_text.go
   creating: ollama-0.12.6/model/models/gptoss/
  inflating: ollama-0.12.6/model/models/gptoss/model.go
   creating: ollama-0.12.6/model/models/llama/
  inflating: ollama-0.12.6/model/models/llama/model.go
   creating: ollama-0.12.6/model/models/llama4/
  inflating: ollama-0.12.6/model/models/llama4/model.go
  inflating: ollama-0.12.6/model/models/llama4/model_text.go
  inflating: ollama-0.12.6/model/models/llama4/model_vision.go
  inflating: ollama-0.12.6/model/models/llama4/process_image.go
  inflating: ollama-0.12.6/model/models/llama4/process_image_test.go
   creating: ollama-0.12.6/model/models/mistral3/
  inflating: ollama-0.12.6/model/models/mistral3/imageproc.go
  inflating: ollama-0.12.6/model/models/mistral3/model.go
  inflating: ollama-0.12.6/model/models/mistral3/model_text.go
  inflating: ollama-0.12.6/model/models/mistral3/model_vision.go
   creating: ollama-0.12.6/model/models/mllama/
  inflating: ollama-0.12.6/model/models/mllama/model.go
  inflating: ollama-0.12.6/model/models/mllama/model_text.go
  inflating: ollama-0.12.6/model/models/mllama/model_vision.go
  inflating: ollama-0.12.6/model/models/mllama/process_image.go
  inflating: ollama-0.12.6/model/models/mllama/process_image_test.go
  inflating: ollama-0.12.6/model/models/models.go
   creating: ollama-0.12.6/model/models/qwen2/
  inflating: ollama-0.12.6/model/models/qwen2/model.go
   creating: ollama-0.12.6/model/models/qwen25vl/
  inflating: ollama-0.12.6/model/models/qwen25vl/model.go
  inflating: ollama-0.12.6/model/models/qwen25vl/model_text.go
  inflating: ollama-0.12.6/model/models/qwen25vl/model_vision.go
  inflating: ollama-0.12.6/model/models/qwen25vl/process_image.go
   creating: ollama-0.12.6/model/models/qwen3/
  inflating: ollama-0.12.6/model/models/qwen3/embed.go
  inflating: ollama-0.12.6/model/models/qwen3/model.go
   creating: ollama-0.12.6/model/parsers/
  inflating: ollama-0.12.6/model/parsers/parsers.go
  inflating: ollama-0.12.6/model/parsers/parsers_test.go
  inflating: ollama-0.12.6/model/parsers/qwen3coder.go
  inflating: ollama-0.12.6/model/parsers/qwen3coder_test.go
  inflating: ollama-0.12.6/model/parsers/qwen3vl.go
  inflating: ollama-0.12.6/model/parsers/qwen3vl_nonthinking_test.go
  inflating: ollama-0.12.6/model/parsers/qwen3vl_thinking_test.go
   creating: ollama-0.12.6/model/renderers/
  inflating: ollama-0.12.6/model/renderers/qwen3coder.go
  inflating: ollama-0.12.6/model/renderers/qwen3coder_test.go
  inflating: ollama-0.12.6/model/renderers/qwen3vl.go
  inflating: ollama-0.12.6/model/renderers/qwen3vl_nonthinking_test.go
  inflating: ollama-0.12.6/model/renderers/qwen3vl_test.go
  inflating: ollama-0.12.6/model/renderers/qwen3vl_thinking_test.go
  inflating: ollama-0.12.6/model/renderers/renderer.go
  inflating: ollama-0.12.6/model/renderers/renderer_test.go
  inflating: ollama-0.12.6/model/sentencepiece.go
  inflating: ollama-0.12.6/model/sentencepiece_test.go
   creating: ollama-0.12.6/model/testdata/
   creating: ollama-0.12.6/model/testdata/gemma2/
  inflating: ollama-0.12.6/model/testdata/gemma2/tokenizer.model
   creating: ollama-0.12.6/model/testdata/llama3.2/
  inflating: ollama-0.12.6/model/testdata/llama3.2/encoder.json
  inflating: ollama-0.12.6/model/testdata/llama3.2/vocab.bpe
  inflating: ollama-0.12.6/model/testdata/war-and-peace.txt
  inflating: ollama-0.12.6/model/textprocessor.go
  inflating: ollama-0.12.6/model/vocabulary.go
  inflating: ollama-0.12.6/model/vocabulary_test.go
  inflating: ollama-0.12.6/model/wordpiece.go
  inflating: ollama-0.12.6/model/wordpiece_test.go
   creating: ollama-0.12.6/openai/
  inflating: ollama-0.12.6/openai/openai.go
  inflating: ollama-0.12.6/openai/openai_test.go
   creating: ollama-0.12.6/parser/
  inflating: ollama-0.12.6/parser/expandpath_test.go
  inflating: ollama-0.12.6/parser/parser.go
  inflating: ollama-0.12.6/parser/parser_test.go
   creating: ollama-0.12.6/progress/
  inflating: ollama-0.12.6/progress/bar.go
  inflating: ollama-0.12.6/progress/progress.go
  inflating: ollama-0.12.6/progress/spinner.go
   creating: ollama-0.12.6/readline/
  inflating: ollama-0.12.6/readline/buffer.go
  inflating: ollama-0.12.6/readline/errors.go
  inflating: ollama-0.12.6/readline/history.go
  inflating: ollama-0.12.6/readline/readline.go
  inflating: ollama-0.12.6/readline/readline_unix.go
  inflating: ollama-0.12.6/readline/readline_windows.go
  inflating: ollama-0.12.6/readline/term.go
  inflating: ollama-0.12.6/readline/term_bsd.go
  inflating: ollama-0.12.6/readline/term_linux.go
  inflating: ollama-0.12.6/readline/term_windows.go
  inflating: ollama-0.12.6/readline/types.go
   creating: ollama-0.12.6/runner/
  inflating: ollama-0.12.6/runner/README.md
   creating: ollama-0.12.6/runner/common/
  inflating: ollama-0.12.6/runner/common/stop.go
  inflating: ollama-0.12.6/runner/common/stop_test.go
   creating: ollama-0.12.6/runner/llamarunner/
  inflating: ollama-0.12.6/runner/llamarunner/cache.go
  inflating: ollama-0.12.6/runner/llamarunner/cache_test.go
  inflating: ollama-0.12.6/runner/llamarunner/image.go
  inflating: ollama-0.12.6/runner/llamarunner/image_test.go
  inflating: ollama-0.12.6/runner/llamarunner/runner.go
   creating: ollama-0.12.6/runner/ollamarunner/
  inflating: ollama-0.12.6/runner/ollamarunner/cache.go
  inflating: ollama-0.12.6/runner/ollamarunner/cache_test.go
  inflating: ollama-0.12.6/runner/ollamarunner/multimodal.go
  inflating: ollama-0.12.6/runner/ollamarunner/runner.go
  inflating: ollama-0.12.6/runner/runner.go
   creating: ollama-0.12.6/sample/
  inflating: ollama-0.12.6/sample/samplers.go
  inflating: ollama-0.12.6/sample/samplers_benchmark_test.go
  inflating: ollama-0.12.6/sample/samplers_test.go
  inflating: ollama-0.12.6/sample/transforms.go
  inflating: ollama-0.12.6/sample/transforms_test.go
   creating: ollama-0.12.6/scripts/
  inflating: ollama-0.12.6/scripts/build_darwin.sh
  inflating: ollama-0.12.6/scripts/build_docker.sh
  inflating: ollama-0.12.6/scripts/build_linux.sh
  inflating: ollama-0.12.6/scripts/build_windows.ps1
  inflating: ollama-0.12.6/scripts/env.sh
  inflating: ollama-0.12.6/scripts/install.sh
  inflating: ollama-0.12.6/scripts/push_docker.sh
  inflating: ollama-0.12.6/scripts/tag_latest.sh
   creating: ollama-0.12.6/server/
  inflating: ollama-0.12.6/server/auth.go
  inflating: ollama-0.12.6/server/create.go
  inflating: ollama-0.12.6/server/create_test.go
  inflating: ollama-0.12.6/server/download.go
  inflating: ollama-0.12.6/server/fixblobs.go
  inflating: ollama-0.12.6/server/fixblobs_test.go
  inflating: ollama-0.12.6/server/images.go
  inflating: ollama-0.12.6/server/images_test.go
   creating: ollama-0.12.6/server/internal/
   creating: ollama-0.12.6/server/internal/cache/
   creating: ollama-0.12.6/server/internal/cache/blob/
  inflating: ollama-0.12.6/server/internal/cache/blob/cache.go
  inflating: ollama-0.12.6/server/internal/cache/blob/cache_test.go
  inflating: ollama-0.12.6/server/internal/cache/blob/casecheck_test.go
  inflating: ollama-0.12.6/server/internal/cache/blob/chunked.go
  inflating: ollama-0.12.6/server/internal/cache/blob/digest.go
  inflating: ollama-0.12.6/server/internal/cache/blob/digest_test.go
   creating: ollama-0.12.6/server/internal/client/
   creating: ollama-0.12.6/server/internal/client/ollama/
  inflating: ollama-0.12.6/server/internal/client/ollama/registry.go
  inflating: ollama-0.12.6/server/internal/client/ollama/registry_synctest_test.go
  inflating: ollama-0.12.6/server/internal/client/ollama/registry_test.go
  inflating: ollama-0.12.6/server/internal/client/ollama/trace.go
   creating: ollama-0.12.6/server/internal/internal/
   creating: ollama-0.12.6/server/internal/internal/backoff/
  inflating: ollama-0.12.6/server/internal/internal/backoff/backoff.go
  inflating: ollama-0.12.6/server/internal/internal/backoff/backoff_synctest_test.go
  inflating: ollama-0.12.6/server/internal/internal/backoff/backoff_test.go
   creating: ollama-0.12.6/server/internal/internal/names/
  inflating: ollama-0.12.6/server/internal/internal/names/name.go
  inflating: ollama-0.12.6/server/internal/internal/names/name_test.go
   creating: ollama-0.12.6/server/internal/internal/stringsx/
  inflating: ollama-0.12.6/server/internal/internal/stringsx/stringsx.go
  inflating: ollama-0.12.6/server/internal/internal/stringsx/stringsx_test.go
   creating: ollama-0.12.6/server/internal/internal/syncs/
  inflating: ollama-0.12.6/server/internal/internal/syncs/line.go
  inflating: ollama-0.12.6/server/internal/internal/syncs/line_test.go
  inflating: ollama-0.12.6/server/internal/internal/syncs/syncs.go
   creating: ollama-0.12.6/server/internal/manifest/
  inflating: ollama-0.12.6/server/internal/manifest/manifest.go
   creating: ollama-0.12.6/server/internal/registry/
  inflating: ollama-0.12.6/server/internal/registry/server.go
  inflating: ollama-0.12.6/server/internal/registry/server_test.go
   creating: ollama-0.12.6/server/internal/registry/testdata/
   creating: ollama-0.12.6/server/internal/registry/testdata/models/
   creating: ollama-0.12.6/server/internal/registry/testdata/models/blobs/
  inflating: ollama-0.12.6/server/internal/registry/testdata/models/blobs/sha256-a4e5e156ddec27e286f75328784d7106b60a4eb1d246e950a001a3f944fbda99
  inflating: ollama-0.12.6/server/internal/registry/testdata/models/blobs/sha256-ecfb1acfca9c76444d622fcdc3840217bd502124a9d3687d438c19b3cb9c3cb1
   creating: ollama-0.12.6/server/internal/registry/testdata/models/manifests/
   creating: ollama-0.12.6/server/internal/registry/testdata/models/manifests/example.com/
   creating: ollama-0.12.6/server/internal/registry/testdata/models/manifests/example.com/library/
   creating: ollama-0.12.6/server/internal/registry/testdata/models/manifests/example.com/library/smol/
  inflating: ollama-0.12.6/server/internal/registry/testdata/models/manifests/example.com/library/smol/latest
  inflating: ollama-0.12.6/server/internal/registry/testdata/registry.txt
   creating: ollama-0.12.6/server/internal/testutil/
  inflating: ollama-0.12.6/server/internal/testutil/testutil.go
  inflating: ollama-0.12.6/server/layer.go
  inflating: ollama-0.12.6/server/manifest.go
  inflating: ollama-0.12.6/server/manifest_test.go
  inflating: ollama-0.12.6/server/model.go
  inflating: ollama-0.12.6/server/modelpath.go
  inflating: ollama-0.12.6/server/modelpath_test.go
  inflating: ollama-0.12.6/server/prompt.go
  inflating: ollama-0.12.6/server/prompt_test.go
  inflating: ollama-0.12.6/server/quantization.go
  inflating: ollama-0.12.6/server/quantization_test.go
  inflating: ollama-0.12.6/server/routes.go
  inflating: ollama-0.12.6/server/routes_create_test.go
  inflating: ollama-0.12.6/server/routes_debug_test.go
  inflating: ollama-0.12.6/server/routes_delete_test.go
  inflating: ollama-0.12.6/server/routes_generate_renderer_test.go
  inflating: ollama-0.12.6/server/routes_generate_test.go
  inflating: ollama-0.12.6/server/routes_harmony_streaming_test.go
  inflating: ollama-0.12.6/server/routes_list_test.go
  inflating: ollama-0.12.6/server/routes_test.go
  inflating: ollama-0.12.6/server/sched.go
  inflating: ollama-0.12.6/server/sched_test.go
 extracting: ollama-0.12.6/server/sparse_common.go
  inflating: ollama-0.12.6/server/sparse_windows.go
  inflating: ollama-0.12.6/server/upload.go
   creating: ollama-0.12.6/template/
  inflating: ollama-0.12.6/template/alfred.gotmpl
  inflating: ollama-0.12.6/template/alfred.json
  inflating: ollama-0.12.6/template/alpaca.gotmpl
  inflating: ollama-0.12.6/template/alpaca.json
  inflating: ollama-0.12.6/template/chatml.gotmpl
  inflating: ollama-0.12.6/template/chatml.json
  inflating: ollama-0.12.6/template/chatqa.gotmpl
  inflating: ollama-0.12.6/template/chatqa.json
  inflating: ollama-0.12.6/template/codellama-70b-instruct.gotmpl
  inflating: ollama-0.12.6/template/codellama-70b-instruct.json
  inflating: ollama-0.12.6/template/command-r.gotmpl
  inflating: ollama-0.12.6/template/command-r.json
  inflating: ollama-0.12.6/template/falcon-instruct.gotmpl
  inflating: ollama-0.12.6/template/falcon-instruct.json
  inflating: ollama-0.12.6/template/gemma-instruct.gotmpl
  inflating: ollama-0.12.6/template/gemma-instruct.json
  inflating: ollama-0.12.6/template/gemma3-instruct.gotmpl
  inflating: ollama-0.12.6/template/gemma3-instruct.json
  inflating: ollama-0.12.6/template/granite-instruct.gotmpl
  inflating: ollama-0.12.6/template/granite-instruct.json
  inflating: ollama-0.12.6/template/index.json
  inflating: ollama-0.12.6/template/llama2-chat.gotmpl
  inflating: ollama-0.12.6/template/llama2-chat.json
  inflating: ollama-0.12.6/template/llama3-instruct.gotmpl
  inflating: ollama-0.12.6/template/llama3-instruct.json
  inflating: ollama-0.12.6/template/magicoder.gotmpl
  inflating: ollama-0.12.6/template/magicoder.json
  inflating: ollama-0.12.6/template/mistral-instruct.gotmpl
  inflating: ollama-0.12.6/template/mistral-instruct.json
  inflating: ollama-0.12.6/template/openchat.gotmpl
  inflating: ollama-0.12.6/template/openchat.json
  inflating: ollama-0.12.6/template/phi-3.gotmpl
  inflating: ollama-0.12.6/template/phi-3.json
  inflating: ollama-0.12.6/template/solar-instruct.gotmpl
  inflating: ollama-0.12.6/template/solar-instruct.json
  inflating: ollama-0.12.6/template/starcoder2-instruct.gotmpl
  inflating: ollama-0.12.6/template/starcoder2-instruct.json
  inflating: ollama-0.12.6/template/template.go
  inflating: ollama-0.12.6/template/template_test.go
   creating: ollama-0.12.6/template/testdata/
   creating: ollama-0.12.6/template/testdata/alfred.gotmpl/
  inflating: ollama-0.12.6/template/testdata/alfred.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/alfred.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/alfred.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/alpaca.gotmpl/
  inflating: ollama-0.12.6/template/testdata/alpaca.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/alpaca.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/alpaca.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/chatml.gotmpl/
  inflating: ollama-0.12.6/template/testdata/chatml.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/chatml.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/chatml.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/chatqa.gotmpl/
  inflating: ollama-0.12.6/template/testdata/chatqa.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/chatqa.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/chatqa.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/codellama-70b-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/codellama-70b-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/codellama-70b-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/codellama-70b-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/command-r.gotmpl/
  inflating: ollama-0.12.6/template/testdata/command-r.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/command-r.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/command-r.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/falcon-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/falcon-instruct.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/falcon-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/falcon-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/gemma-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/gemma-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/gemma-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/gemma-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/gemma3-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/gemma3-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/gemma3-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/gemma3-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/granite-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/granite-instruct.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/granite-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/granite-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/llama2-chat.gotmpl/
  inflating: ollama-0.12.6/template/testdata/llama2-chat.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/llama2-chat.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/llama2-chat.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/llama3-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/llama3-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/llama3-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/llama3-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/magicoder.gotmpl/
  inflating: ollama-0.12.6/template/testdata/magicoder.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/magicoder.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/magicoder.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/mistral-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/mistral-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/mistral-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/mistral-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/openchat.gotmpl/
  inflating: ollama-0.12.6/template/testdata/openchat.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/openchat.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/openchat.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/phi-3.gotmpl/
  inflating: ollama-0.12.6/template/testdata/phi-3.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.6/template/testdata/phi-3.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/phi-3.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/solar-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/solar-instruct.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/solar-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/solar-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/starcoder2-instruct.gotmpl/
  inflating: ollama-0.12.6/template/testdata/starcoder2-instruct.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/starcoder2-instruct.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/starcoder2-instruct.gotmpl/user-assistant-user
  inflating: ollama-0.12.6/template/testdata/templates.jsonl
   creating: ollama-0.12.6/template/testdata/vicuna.gotmpl/
  inflating: ollama-0.12.6/template/testdata/vicuna.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/vicuna.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/vicuna.gotmpl/user-assistant-user
   creating: ollama-0.12.6/template/testdata/zephyr.gotmpl/
  inflating: ollama-0.12.6/template/testdata/zephyr.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.6/template/testdata/zephyr.gotmpl/user
  inflating: ollama-0.12.6/template/testdata/zephyr.gotmpl/user-assistant-user
  inflating: ollama-0.12.6/template/vicuna.gotmpl
  inflating: ollama-0.12.6/template/vicuna.json
  inflating: ollama-0.12.6/template/zephyr.gotmpl
  inflating: ollama-0.12.6/template/zephyr.json
   creating: ollama-0.12.6/thinking/
  inflating: ollama-0.12.6/thinking/parser.go
  inflating: ollama-0.12.6/thinking/parser_test.go
  inflating: ollama-0.12.6/thinking/template.go
  inflating: ollama-0.12.6/thinking/template_test.go
   creating: ollama-0.12.6/tools/
  inflating: ollama-0.12.6/tools/template.go
  inflating: ollama-0.12.6/tools/template_test.go
  inflating: ollama-0.12.6/tools/tools.go
  inflating: ollama-0.12.6/tools/tools_test.go
   creating: ollama-0.12.6/types/
   creating: ollama-0.12.6/types/errtypes/
  inflating: ollama-0.12.6/types/errtypes/errtypes.go
   creating: ollama-0.12.6/types/model/
  inflating: ollama-0.12.6/types/model/capability.go
  inflating: ollama-0.12.6/types/model/name.go
  inflating: ollama-0.12.6/types/model/name_test.go
   creating: ollama-0.12.6/types/model/testdata/
   creating: ollama-0.12.6/types/model/testdata/fuzz/
   creating: ollama-0.12.6/types/model/testdata/fuzz/FuzzName/
 extracting: ollama-0.12.6/types/model/testdata/fuzz/FuzzName/d37463aa416f6bab
   creating: ollama-0.12.6/types/syncmap/
  inflating: ollama-0.12.6/types/syncmap/syncmap.go
   creating: ollama-0.12.6/version/
  inflating: ollama-0.12.6/version/version.go
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd ollama-0.12.6
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ cd /builddir/build/BUILD/ollama-0.12.6-build
+ cd ollama-0.12.6
+ /usr/lib/rpm/rpmuncompress -x -v /builddir/build/SOURCES/main.zip
TZ=UTC /usr/bin/unzip -u '/builddir/build/SOURCES/main.zip'
Archive:  /builddir/build/SOURCES/main.zip
1be385fb761ae92648392207c43ba8d1cda7152d
   creating: ollamad-main/
  inflating: ollamad-main/README.md
 extracting: ollamad-main/ollamad-ld.conf
  inflating: ollamad-main/ollamad.conf
  inflating: ollamad-main/ollamad.service
  inflating: ollamad-main/ollamad.sysusers
   creating: ollamad-main/packaging/
  inflating: ollamad-main/packaging/ollama.spec
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.gDQt3Q
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.6-build
+ CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CFLAGS
+ CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CXXFLAGS
+ FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FFLAGS
+ FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FCFLAGS
+ VALAFLAGS=-g
+ export VALAFLAGS
+ RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn'
+ export RUSTFLAGS
+ LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '
+ export LDFLAGS
+ LT_SYS_LIBRARY_PATH=/usr/lib64:
+ export LT_SYS_LIBRARY_PATH
+ CC=gcc
+ export CC
+ CXX=g++
+ export CXX
+ cd ollama-0.12.6
+ cmake -B /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6 --preset Vulkan
Preset CMake variables:

  CMAKE_BUILD_TYPE="Release"
  CMAKE_INSTALL_PREFIX:PATH="/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/dist"
  CMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded"

-- The C compiler identification is GNU 15.2.1
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/lib64/ccache/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/lib64/ccache/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- Looking for a HIP compiler
-- Looking for a HIP compiler - NOTFOUND
-- Found Vulkan: /lib64/libvulkan.so (found version "1.4.313") found components: glslc glslangValidator
-- Vulkan found
-- GL_KHR_cooperative_matrix supported by glslc
-- GL_NV_cooperative_matrix2 supported by glslc
-- GL_EXT_integer_dot_product supported by glslc
-- GL_EXT_bfloat16 supported by glslc
-- Configuring done (0.6s)
-- Generating done (0.0s)
-- Build files have been written to: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6
+ cmake --build /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6
[  0%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml.c:5883:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function]
 5883 | static void ggml_hash_map_free(struct hash_map * map) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml.c:5876:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function]
 5876 | static struct hash_map * ggml_new_hash_map(size_t size) {
      |                          ^~~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml.c:5:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  1%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml.cpp:1:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  1%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-alloc.c:108:13: warning: ‘ggml_buffer_address_less’ defined but not used [-Wunused-function]
  108 | static bool ggml_buffer_address_less(struct buffer_address a, struct buffer_address b) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-alloc.c:4:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
[  1%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-backend.cpp:14:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
[  1%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-opt.cpp:6:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o
[  2%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-quants.c:4068:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function]
 4068 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid,
      |            ^~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function]
  579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min,
      |              ^~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-quants.c:5:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/mem_hip.cpp.o
[  2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/mem_nvml.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/mem_nvml.cpp:12:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/gguf.cpp:3:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  3%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so
[  3%] Built target ggml-base
[  4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[  4%] Built target ggml-cpu-x64-feats
[  4%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*,
ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o [ 5%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | 
^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static 
size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static 
size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * 
key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
[ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o
[ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) {
      |                                    ^~~~~~~~~~~~~~~~
[ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 8%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o
[ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 8%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-x64.so
[ 8%] Built target ggml-cpu-x64
[ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 8%] Built target ggml-cpu-sse42-feats
[ 9%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o
[ 9%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o
[ 9%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o
[ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o
[ 10%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o
[ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o
^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set 
* hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set 
* hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 12%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t 
ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, 
size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 12%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: 
warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 12%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  203 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
  151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 12%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 13%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sse42.so
[ 13%] Built target ggml-cpu-sse42
[ 13%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 13%] Built target ggml-cpu-sandybridge-feats
[ 14%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o
[ 14%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o
[ 14%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o
[ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o
[ 15%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o
[ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o
[ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o
[ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o
[ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o
[ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included 
from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, 
int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 17%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const 
struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool 
ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void 
ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined 
but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 18%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sandybridge.so [ 18%] Built target ggml-cpu-sandybridge [ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 18%] Built target ggml-cpu-haswell-feats [ 19%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 203 | static size_t 
ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 19%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o In file included from 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float 
value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 19%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6:
(same ggml-impl.h -Wunused-function warnings as above, repeated)
[ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o
[ 20%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4:
(same warnings, repeated)
[ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1:
(same warnings, repeated)
[ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1:
(same warnings, repeated)
[ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7:
(same warnings, repeated)
[ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1:
(same warnings, repeated)
[ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1:
(same warnings, repeated)
[ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1:
(same warnings, repeated)
[ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) {
      |                                    ^~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4:
(same ggml-impl.h -Wunused-function warnings as above, repeated)
[ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52:
(same warnings, repeated)
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 22%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 277 
| static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used 
[-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 23%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so [ 23%] Built target ggml-cpu-haswell [ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 23%] Built target ggml-cpu-skylakex-feats [ 23%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 24%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t 
ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | 
static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 24%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 24%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o [ 24%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { 
| ^~~~~~~~~~~~~~~~~~~~ [ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct 
ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
[ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o
[ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o
[ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o
[ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o
[ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o
[ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) {
      |                  ^~~~~~~~~~~~~~~~
[ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 27%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o
[ 27%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 27%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-skylakex.so
[ 27%] Built target ggml-cpu-skylakex
[ 27%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 27%] Built target ggml-cpu-icelake-feats
[ 27%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o
[ 27%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6,
                 from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used
[-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * 
tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o [ 28%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | 
^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | 
static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | 
static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor 
* key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
[ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
[ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 31%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o
[ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 31%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-icelake.so
[ 31%] Built target ggml-cpu-icelake
[ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 31%] Built target ggml-cpu-alderlake-feats
[ 31%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o
[ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o
[ 32%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o
[ 32%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o
[ 32%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o
[ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, 
ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, 
ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct 
ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static 
size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * 
params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:298:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 298 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:277:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 277 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:272:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 272 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:203:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 203 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:166:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 166 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:161:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 161 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:156:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 156 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:151:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 151 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:145:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 145 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 35%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o
[ 35%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 35%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-alderlake.so
[ 35%] Built target ggml-cpu-alderlake
[ 35%] Creating directories for 'vulkan-shaders-gen'
[ 35%] No download step for 'vulkan-shaders-gen'
[ 36%] No update step for 'vulkan-shaders-gen'
[ 36%] No patch step for 'vulkan-shaders-gen'
[ 36%] Performing configure step for 'vulkan-shaders-gen'
-- The C compiler identification is GNU 15.2.1
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/lib64/ccache/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/lib64/ccache/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Enabling coopmat glslc support
-- Enabling coopmat2 glslc support
-- Enabling dot glslc support
-- Enabling bfloat16 glslc support
-- Configuring done (0.5s)
-- Generating done (0.0s)
-- Build files have been written to: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders-gen-prefix/src/vulkan-shaders-gen-build
[ 36%] Performing build step for 'vulkan-shaders-gen'
[ 50%] Building CXX object CMakeFiles/vulkan-shaders-gen.dir/vulkan-shaders-gen.cpp.o
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp: In function ‘void {anonymous}::write_output_files()’:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:969:34: warning: comparison with string literal results in unspecified behavior [-Waddress]
  969 |             std::string op_file = op == "add_rms" ? "add.comp" : std::string(op) + ".comp";
      |                                   ~~~^~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp: At global scope:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:221:6: warning: ‘bool {anonymous}::is_iq_quant(const std::string&)’ defined but not used [-Wunused-function]
  221 | bool is_iq_quant(const std::string& type_name) {
      |      ^~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:217:6: warning: ‘bool {anonymous}::is_k_quant(const std::string&)’ defined but not used [-Wunused-function]
  217 | bool is_k_quant(const std::string& type_name) {
      |      ^~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:209:6: warning: ‘bool {anonymous}::is_quantized_type(const std::string&)’ defined but not used [-Wunused-function]
  209 | bool is_quantized_type(const std::string& type_name) {
      |      ^~~~~~~~~~~~~~~~~
[100%] Linking CXX executable vulkan-shaders-gen
[100%] Built target vulkan-shaders-gen
[ 37%] Performing install step for 'vulkan-shaders-gen'
-- Installing: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/Release/./vulkan-shaders-gen
[ 38%] Completed 'vulkan-shaders-gen'
[ 38%] Built target vulkan-shaders-gen
[ 38%] Generate vulkan shaders for wkv7.comp
[ 38%] Generate vulkan shaders for acc.comp
[ 38%] Generate vulkan shaders for add.comp
[ 39%] Generate vulkan shaders for add_id.comp
[ 39%] Generate vulkan shaders for argmax.comp
[ 39%] Generate vulkan shaders for argsort.comp
[ 39%] Generate vulkan shaders for clamp.comp
[ 40%] Generate vulkan shaders for concat.comp
[ 40%] Generate vulkan shaders for contig_copy.comp
[ 40%] Generate vulkan shaders for conv2d_dw.comp
[ 40%] Generate vulkan shaders for conv2d_mm.comp
Generate vulkan shaders for conv_transpose_1d.comp [ 41%] Generate vulkan shaders for copy.comp [ 41%] Generate vulkan shaders for copy_from_quant.comp [ 41%] Generate vulkan shaders for copy_to_quant.comp [ 42%] Generate vulkan shaders for cos.comp [ 42%] Generate vulkan shaders for count_equal.comp [ 42%] Generate vulkan shaders for dequant_f32.comp [ 43%] Generate vulkan shaders for dequant_iq1_m.comp [ 43%] Generate vulkan shaders for dequant_iq1_s.comp [ 43%] Generate vulkan shaders for dequant_iq2_s.comp [ 43%] Generate vulkan shaders for dequant_iq2_xs.comp [ 44%] Generate vulkan shaders for dequant_iq2_xxs.comp [ 44%] Generate vulkan shaders for dequant_iq3_s.comp [ 44%] Generate vulkan shaders for dequant_iq3_xxs.comp [ 44%] Generate vulkan shaders for dequant_iq4_nl.comp [ 45%] Generate vulkan shaders for dequant_iq4_xs.comp [ 45%] Generate vulkan shaders for dequant_mxfp4.comp [ 45%] Generate vulkan shaders for dequant_q2_k.comp [ 45%] Generate vulkan shaders for dequant_q3_k.comp [ 46%] Generate vulkan shaders for dequant_q4_0.comp [ 46%] Generate vulkan shaders for dequant_q4_1.comp [ 46%] Generate vulkan shaders for dequant_q4_k.comp [ 47%] Generate vulkan shaders for dequant_q5_0.comp [ 47%] Generate vulkan shaders for dequant_q5_1.comp [ 47%] Generate vulkan shaders for dequant_q5_k.comp [ 47%] Generate vulkan shaders for dequant_q6_k.comp [ 48%] Generate vulkan shaders for dequant_q8_0.comp [ 48%] Generate vulkan shaders for diag_mask_inf.comp [ 48%] Generate vulkan shaders for div.comp [ 48%] Generate vulkan shaders for exp.comp [ 49%] Generate vulkan shaders for flash_attn.comp [ 49%] Generate vulkan shaders for flash_attn_cm1.comp [ 49%] Generate vulkan shaders for flash_attn_cm2.comp [ 49%] Generate vulkan shaders for flash_attn_split_k_reduce.comp [ 50%] Generate vulkan shaders for geglu.comp [ 50%] Generate vulkan shaders for geglu_erf.comp [ 50%] Generate vulkan shaders for geglu_quick.comp [ 51%] Generate vulkan shaders for gelu.comp [ 51%] 
Generate vulkan shaders for gelu_erf.comp [ 51%] Generate vulkan shaders for gelu_quick.comp [ 51%] Generate vulkan shaders for get_rows.comp [ 52%] Generate vulkan shaders for get_rows_quant.comp [ 52%] Generate vulkan shaders header [ 52%] Generate vulkan shaders for group_norm.comp [ 52%] Generate vulkan shaders for hardsigmoid.comp [ 52%] Generate vulkan shaders for hardswish.comp [ 53%] Generate vulkan shaders for im2col.comp [ 53%] Generate vulkan shaders for im2col_3d.comp [ 53%] Generate vulkan shaders for l2_norm.comp [ 53%] Generate vulkan shaders for leaky_relu.comp [ 54%] Generate vulkan shaders for mul.comp [ 54%] Generate vulkan shaders for mul_mat_split_k_reduce.comp [ 54%] Generate vulkan shaders for mul_mat_vec.comp [ 54%] Generate vulkan shaders for mul_mat_vec_iq1_m.comp [ 55%] Generate vulkan shaders for mul_mat_vec_iq1_s.comp [ 55%] Generate vulkan shaders for mul_mat_vec_iq2_s.comp [ 55%] Generate vulkan shaders for mul_mat_vec_iq2_xs.comp [ 56%] Generate vulkan shaders for mul_mat_vec_iq2_xxs.comp [ 56%] Generate vulkan shaders for mul_mat_vec_iq3_s.comp [ 56%] Generate vulkan shaders for mul_mat_vec_iq3_xxs.comp [ 56%] Generate vulkan shaders for mul_mat_vec_nc.comp [ 57%] Generate vulkan shaders for mul_mat_vec_p021.comp [ 57%] Generate vulkan shaders for mul_mat_vec_q2_k.comp [ 57%] Generate vulkan shaders for mul_mat_vec_q3_k.comp [ 57%] Generate vulkan shaders for mul_mat_vec_q4_k.comp [ 58%] Generate vulkan shaders for mul_mat_vec_q5_k.comp [ 58%] Generate vulkan shaders for mul_mat_vec_q6_k.comp [ 58%] Generate vulkan shaders for mul_mat_vecq.comp [ 58%] Generate vulkan shaders for mul_mm.comp [ 59%] Generate vulkan shaders for mul_mm_cm2.comp [ 59%] Generate vulkan shaders for mul_mmq.comp [ 59%] Generate vulkan shaders for multi_add.comp [ 60%] Generate vulkan shaders for norm.comp [ 60%] Generate vulkan shaders for opt_step_adamw.comp [ 60%] Generate vulkan shaders for opt_step_sgd.comp [ 60%] Generate vulkan shaders for pad.comp [ 
61%] Generate vulkan shaders for pool2d.comp [ 61%] Generate vulkan shaders for quantize_q8_1.comp [ 61%] Generate vulkan shaders for reglu.comp [ 61%] Generate vulkan shaders for relu.comp [ 62%] Generate vulkan shaders for repeat.comp [ 62%] Generate vulkan shaders for repeat_back.comp [ 62%] Generate vulkan shaders for rms_norm.comp [ 62%] Generate vulkan shaders for rms_norm_back.comp [ 63%] Generate vulkan shaders for rms_norm_partials.comp [ 63%] Generate vulkan shaders for roll.comp [ 63%] Generate vulkan shaders for rope_multi.comp [ 64%] Generate vulkan shaders for rope_neox.comp [ 64%] Generate vulkan shaders for rope_norm.comp [ 64%] Generate vulkan shaders for rope_vision.comp [ 64%] Generate vulkan shaders for scale.comp [ 65%] Generate vulkan shaders for sigmoid.comp [ 65%] Generate vulkan shaders for silu.comp [ 65%] Generate vulkan shaders for silu_back.comp [ 65%] Generate vulkan shaders for sin.comp [ 66%] Generate vulkan shaders for soft_max.comp [ 66%] Generate vulkan shaders for soft_max_back.comp [ 66%] Generate vulkan shaders for sqrt.comp [ 66%] Generate vulkan shaders for square.comp [ 67%] Generate vulkan shaders for sub.comp [ 67%] Generate vulkan shaders for sum_rows.comp [ 67%] Generate vulkan shaders for swiglu.comp [ 67%] Generate vulkan shaders for swiglu_oai.comp [ 68%] Generate vulkan shaders for tanh.comp [ 68%] Generate vulkan shaders for timestep_embedding.comp [ 68%] Generate vulkan shaders for upscale.comp [ 69%] Generate vulkan shaders for wkv6.comp [ 69%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/../include/ggml-vulkan.h:3, from /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:1: /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp: In function ‘void 
ggml_backend_vk_get_device_memory(ggml_backend_vk_device_context*, size_t*, size_t*)’:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12444:29: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘long unsigned int’} and ‘int’ [-Wsign-compare]
12444 |     GGML_ASSERT(ctx->device < (int) vk_instance.device_indices.size());
      |                 ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/../include/ggml.h:278:30: note: in definition of macro ‘GGML_ASSERT’
  278 | #define GGML_ASSERT(x) if (!(x)) GGML_ABORT("GGML_ASSERT(%s) failed", #x)
      |                              ^
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12445:29: warning: comparison of integer expressions of different signedness: ‘size_t’ {aka ‘long unsigned int’} and ‘int’ [-Wsign-compare]
12445 |     GGML_ASSERT(ctx->device < (int) vk_instance.device_supports_membudget.size());
      |                 ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/../include/ggml.h:278:30: note: in definition of macro ‘GGML_ASSERT’
  278 | #define GGML_ASSERT(x) if (!(x)) GGML_ABORT("GGML_ASSERT(%s) failed", #x)
      |                              ^
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12489:23: warning: comparison of integer expressions of different signedness: ‘int’ and ‘uint32_t’ {aka ‘unsigned int’} [-Wsign-compare]
12489 |     for (int i = 0; i < memprops2.memoryProperties.memoryHeapCount; i++) {
      |                     ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12497:23: warning: comparison of integer expressions of different signedness: ‘int’ and ‘uint32_t’ {aka ‘unsigned int’} [-Wsign-compare]
12497 |     for (int i = 0; i < memprops2.memoryProperties.memoryHeapCount; i++) {
      |                     ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12449:40: warning: variable ‘memprops’ set but not used [-Wunused-but-set-variable]
12449 |     vk::PhysicalDeviceMemoryProperties memprops = vkdev.getMemoryProperties();
      |                                        ^~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp: At global scope:
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:13178:13: warning: ‘bool ggml_vk_instance_portability_enumeration_ext_available(const std::vector&)’ defined but not used [-Wunused-function]
13178 | static bool ggml_vk_instance_portability_enumeration_ext_available(const std::vector& instance_extensions) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:12570:13: warning: ‘bool ggml_backend_vk_parse_pci_bus_id(const std::string&, int*, int*, int*)’ defined but not used [-Wunused-function]
12570 | static bool ggml_backend_vk_parse_pci_bus_id(const std::string & id, int *domain, int *bus, int *device) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11929:13: warning: ‘void ggml_backend_vk_synchronize(ggml_backend_t)’ defined but not used [-Wunused-function]
11929 | static void ggml_backend_vk_synchronize(ggml_backend_t backend) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11901:13: warning: ‘bool ggml_backend_vk_cpy_tensor_async(ggml_backend_t, const ggml_tensor*, ggml_tensor*)’ defined but not used [-Wunused-function]
11901 | static bool ggml_backend_vk_cpy_tensor_async(ggml_backend_t backend, const ggml_tensor * src, ggml_tensor * dst) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11878:13: warning: ‘void ggml_backend_vk_get_tensor_async(ggml_backend_t, const ggml_tensor*, void*, size_t, size_t)’ defined but not used [-Wunused-function]
11878 | static void ggml_backend_vk_get_tensor_async(ggml_backend_t backend, const ggml_tensor * tensor, void * data, size_t offset, size_t size) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11855:13: warning: ‘void ggml_backend_vk_set_tensor_async(ggml_backend_t, ggml_tensor*, const void*, size_t, size_t)’ defined but not used [-Wunused-function]
11855 | static void ggml_backend_vk_set_tensor_async(ggml_backend_t backend, ggml_tensor * tensor, const void * data, size_t offset, size_t size) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:11761:21: warning: ‘const char* ggml_backend_vk_host_buffer_name(ggml_backend_buffer_t)’ defined but not used [-Wunused-function]
11761 | static const char * ggml_backend_vk_host_buffer_name(ggml_backend_buffer_t buffer) {
      |                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5353:13: warning: ‘void ggml_vk_buffer_write_nc_async(ggml_backend_vk_context*, vk_context&, vk_buffer&, size_t, const ggml_tensor*, bool)’ defined but not used [-Wunused-function]
 5353 | static void ggml_vk_buffer_write_nc_async(ggml_backend_vk_context * ctx, vk_context& subctx, vk_buffer& dst, size_t offset, const ggml_tensor * tensor, bool sync_staging = false) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5295:13: warning: ‘void ggml_vk_end_submission(vk_submission&, std::vector, std::vector)’ defined but not used [-Wunused-function]
 5295 | static void ggml_vk_end_submission(vk_submission& s, std::vector wait_semaphores, std::vector signal_semaphores) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5150:18: warning: ‘vk_buffer ggml_vk_create_buffer_temp(ggml_backend_vk_context*, size_t)’ defined but not used [-Wunused-function]
 5150 | static vk_buffer ggml_vk_create_buffer_temp(ggml_backend_vk_context * ctx, size_t size) {
      |                  ^~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2195:13: warning: ‘void ggml_vk_wait_events(vk_context&, std::vector&&)’ defined but not used [-Wunused-function]
 2195 | static void ggml_vk_wait_events(vk_context& ctx, std::vector&& events) {
      |             ^~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:1978:18: warning: ‘vk::Event ggml_vk_create_event(ggml_backend_vk_context*)’ defined but not used [-Wunused-function]
 1978 | static vk::Event ggml_vk_create_event(ggml_backend_vk_context * ctx) {
      |                  ^~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:1966:23: warning: ‘vk_semaphore* ggml_vk_create_timeline_semaphore(ggml_backend_vk_context*)’ defined but not used [-Wunused-function]
 1966 | static vk_semaphore * ggml_vk_create_timeline_semaphore(ggml_backend_vk_context * ctx) {
      |                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:1956:23: warning: ‘vk_semaphore* ggml_vk_create_binary_semaphore(ggml_backend_vk_context*)’ defined but not used [-Wunused-function]
 1956 | static vk_semaphore * ggml_vk_create_binary_semaphore(ggml_backend_vk_context * ctx) {
      |                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:88:13: warning: ‘bool is_pow2(uint32_t)’ defined but not used [-Wunused-function]
   88 | static bool is_pow2(uint32_t x) { return x > 1 && (x & (x-1)) == 0; }
      |             ^~~~~~~
[ 69%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/acc.comp.cpp.o
[ 70%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/add.comp.cpp.o
[ 70%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/add_id.comp.cpp.o
[ 70%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/argmax.comp.cpp.o
[ 70%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/argsort.comp.cpp.o
[ 71%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/clamp.comp.cpp.o
[ 71%] Building CXX object
ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/concat.comp.cpp.o [ 71%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/contig_copy.comp.cpp.o [ 71%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/conv2d_dw.comp.cpp.o [ 72%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/conv2d_mm.comp.cpp.o [ 72%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/conv_transpose_1d.comp.cpp.o [ 72%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/copy.comp.cpp.o [ 73%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/copy_from_quant.comp.cpp.o [ 73%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/copy_to_quant.comp.cpp.o [ 73%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/cos.comp.cpp.o [ 73%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/count_equal.comp.cpp.o [ 74%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_f32.comp.cpp.o [ 74%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq1_m.comp.cpp.o [ 74%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq1_s.comp.cpp.o [ 74%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq2_s.comp.cpp.o [ 75%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq2_xs.comp.cpp.o [ 75%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq2_xxs.comp.cpp.o [ 75%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq3_s.comp.cpp.o [ 75%] Building CXX object 
ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq3_xxs.comp.cpp.o
[ 76%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq4_nl.comp.cpp.o
[ 76%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_iq4_xs.comp.cpp.o
[ 76%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_mxfp4.comp.cpp.o
[ 77%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q2_k.comp.cpp.o
[ 77%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q3_k.comp.cpp.o
[ 77%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q4_0.comp.cpp.o
[ 77%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q4_1.comp.cpp.o
[ 78%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q4_k.comp.cpp.o
[ 78%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q5_0.comp.cpp.o
[ 78%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q5_1.comp.cpp.o
[ 78%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q5_k.comp.cpp.o
[ 79%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q6_k.comp.cpp.o
[ 79%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/dequant_q8_0.comp.cpp.o
[ 79%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/diag_mask_inf.comp.cpp.o
[ 79%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/div.comp.cpp.o
[ 80%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/exp.comp.cpp.o
[ 80%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/flash_attn.comp.cpp.o
[ 80%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/flash_attn_cm1.comp.cpp.o
[ 80%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/flash_attn_cm2.comp.cpp.o
[ 81%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/flash_attn_split_k_reduce.comp.cpp.o
[ 81%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/geglu.comp.cpp.o
[ 81%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/geglu_erf.comp.cpp.o
[ 82%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/geglu_quick.comp.cpp.o
[ 82%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/gelu.comp.cpp.o
[ 82%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/gelu_erf.comp.cpp.o
[ 82%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/gelu_quick.comp.cpp.o
[ 83%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/get_rows.comp.cpp.o
[ 83%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/get_rows_quant.comp.cpp.o
[ 83%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/group_norm.comp.cpp.o
[ 83%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/hardsigmoid.comp.cpp.o
[ 84%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/hardswish.comp.cpp.o
[ 84%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/im2col.comp.cpp.o
[ 84%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/im2col_3d.comp.cpp.o
[ 84%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/l2_norm.comp.cpp.o
[ 85%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/leaky_relu.comp.cpp.o
[ 85%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul.comp.cpp.o
[ 85%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_split_k_reduce.comp.cpp.o
[ 86%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec.comp.cpp.o
[ 86%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_iq1_m.comp.cpp.o
[ 86%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_iq1_s.comp.cpp.o
[ 86%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_iq2_s.comp.cpp.o
[ 87%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_iq2_xs.comp.cpp.o
[ 87%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_iq2_xxs.comp.cpp.o
[ 87%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_iq3_s.comp.cpp.o
[ 87%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_iq3_xxs.comp.cpp.o
[ 88%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_nc.comp.cpp.o
[ 88%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_p021.comp.cpp.o
[ 88%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_q2_k.comp.cpp.o
[ 88%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_q3_k.comp.cpp.o
[ 89%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_q4_k.comp.cpp.o
[ 89%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_q5_k.comp.cpp.o
[ 89%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vec_q6_k.comp.cpp.o
[ 90%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mat_vecq.comp.cpp.o
[ 90%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mm.comp.cpp.o
[ 90%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mm_cm2.comp.cpp.o
[ 90%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/mul_mmq.comp.cpp.o
[ 91%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/multi_add.comp.cpp.o
[ 91%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/norm.comp.cpp.o
[ 91%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/opt_step_adamw.comp.cpp.o
[ 91%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/opt_step_sgd.comp.cpp.o
[ 92%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/pad.comp.cpp.o
[ 92%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/pool2d.comp.cpp.o
[ 92%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/quantize_q8_1.comp.cpp.o
[ 92%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/reglu.comp.cpp.o
[ 93%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/relu.comp.cpp.o
[ 93%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/repeat.comp.cpp.o
[ 93%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/repeat_back.comp.cpp.o
[ 93%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/rms_norm.comp.cpp.o
[ 94%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/rms_norm_back.comp.cpp.o
[ 94%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/rms_norm_partials.comp.cpp.o
[ 94%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/roll.comp.cpp.o
[ 95%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/rope_multi.comp.cpp.o
[ 95%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/rope_neox.comp.cpp.o
[ 95%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/rope_norm.comp.cpp.o
[ 95%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/rope_vision.comp.cpp.o
[ 96%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/scale.comp.cpp.o
[ 96%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/sigmoid.comp.cpp.o
[ 96%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/silu.comp.cpp.o
[ 96%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/silu_back.comp.cpp.o
[ 97%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/sin.comp.cpp.o
[ 97%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/soft_max.comp.cpp.o
[ 97%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/soft_max_back.comp.cpp.o
[ 97%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/sqrt.comp.cpp.o
[ 98%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/square.comp.cpp.o
[ 98%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/sub.comp.cpp.o
[ 98%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/sum_rows.comp.cpp.o
[ 99%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/swiglu.comp.cpp.o
[ 99%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/swiglu_oai.comp.cpp.o
[ 99%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/tanh.comp.cpp.o
[ 99%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/timestep_embedding.comp.cpp.o
[100%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/upscale.comp.cpp.o
[100%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/wkv6.comp.cpp.o
[100%] Building CXX object ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/wkv7.comp.cpp.o
[100%] Linking CXX shared module ../../../../../../lib/ollama/libggml-vulkan.so
[100%] Built target ggml-vulkan
+ go build
go: downloading github.com/spf13/cobra v1.7.0
go: downloading github.com/containerd/console v1.0.3
go: downloading github.com/mattn/go-runewidth v0.0.14
go: downloading github.com/olekukonko/tablewriter v0.0.5
go: downloading golang.org/x/crypto v0.36.0
go: downloading golang.org/x/sync v0.12.0
go: downloading golang.org/x/term v0.30.0
go: downloading github.com/rivo/uniseg v0.2.0
go: downloading github.com/google/uuid v1.6.0
go: downloading golang.org/x/text v0.23.0
go: downloading github.com/emirpasic/gods/v2 v2.0.0-alpha
go: downloading github.com/gin-contrib/cors v1.7.2
go: downloading github.com/gin-gonic/gin v1.10.0
go: downloading golang.org/x/image v0.22.0
go: downloading golang.org/x/sys v0.31.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/d4l3k/go-bfloat16 v0.0.0-20211005043715-690c3bdd05f1
go: downloading github.com/nlpodyssey/gopickle v0.3.0
go: downloading github.com/pdevine/tensor v0.0.0-20240510204454-f88f4562727c
go: downloading gonum.org/v1/gonum v0.15.0
go: downloading github.com/x448/float16 v0.8.4
go: downloading google.golang.org/protobuf v1.34.1
go: downloading github.com/agnivade/levenshtein v1.1.1
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.20
go: downloading golang.org/x/net v0.38.0
go: downloading github.com/dlclark/regexp2 v1.11.4
go: downloading github.com/apache/arrow/go/arrow v0.0.0-20211112161151-bc219186db40
go: downloading github.com/chewxy/hm v1.0.0
go: downloading github.com/google/flatbuffers v24.3.25+incompatible
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/chewxy/math32 v1.11.0
go: downloading go4.org/unsafe/assume-no-moving-gc v0.0.0-20231121144256-b99613f794b6
go: downloading gorgonia.org/vecf32 v0.9.0
go: downloading gorgonia.org/vecf64 v0.9.0
go: downloading github.com/go-playground/validator/v10 v10.20.0
go: downloading github.com/pelletier/go-toml/v2 v2.2.2
go: downloading github.com/ugorji/go/codec v1.2.12
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading golang.org/x/exp v0.0.0-20250218142911-aa4b98e5adaa
go: downloading golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading github.com/golang/protobuf v1.5.4
go: downloading github.com/xtgo/set v1.0.0
go: downloading github.com/gabriel-vasile/mimetype v1.4.3
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn v1.4.0
go: downloading github.com/go-playground/locales v0.14.1
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.DAiZ6o
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.6-build
+ '[' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT '!=' / ']'
+ rm -rf /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT
++ dirname /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT
+ mkdir -p /builddir/build/BUILD/ollama-0.12.6-build
+ mkdir /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT
+ CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall
-Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CFLAGS
+ CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CXXFLAGS
+ FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FFLAGS
+ FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FCFLAGS
+ VALAFLAGS=-g
+ export VALAFLAGS
+ RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn'
+ export RUSTFLAGS
+ LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '
+ export LDFLAGS
+ LT_SYS_LIBRARY_PATH=/usr/lib64:
+ export LT_SYS_LIBRARY_PATH
+ CC=gcc
+ export CC
+ CXX=g++
+ export CXX
+ cd ollama-0.12.6
+ install -Dm0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ollama /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/bin/ollama
+ install -Dm0644 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ollamad-main/ollamad.service /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib/systemd/system/ollamad.service
+ install -Dm0644 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ollamad-main/ollamad.conf /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/etc/ollama/ollamad.conf
+ install -Dm0644 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/ollamad-main/ollamad-ld.conf /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/etc/ld.so.conf.d/ollamad-ld.conf
+ install -d /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/var/lib/ollama
+ install -d /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-base.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-cpu-alderlake.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-cpu-haswell.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-cpu-icelake.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-cpu-sandybridge.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-cpu-skylakex.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-cpu-sse42.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-cpu-x64.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ install -m0755 /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/lib/ollama/libggml-vulkan.so /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-base.so
+ patchelf --set-rpath '$ORIGIN' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-base.so
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so
+ patchelf --set-rpath '$ORIGIN' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-alderlake.so
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so
+ patchelf --set-rpath '$ORIGIN' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-haswell.so
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so
+ patchelf --set-rpath '$ORIGIN' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-icelake.so
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so
+ patchelf --set-rpath '$ORIGIN' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sandybridge.so
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so
+ patchelf --set-rpath '$ORIGIN' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-skylakex.so
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so
+ patchelf --set-rpath '$ORIGIN' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-sse42.so
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so
+ patchelf --set-rpath '$ORIGIN' /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-cpu-x64.so
+ for f in /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/*.so
+ patchelf --remove-rpath /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-vulkan.so
+ patchelf --set-rpath '$ORIGIN'
/builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/libggml-vulkan.so
+ /usr/bin/find-debuginfo -j4 --strict-build-id -m -i --build-id-seed 0.12.6-1.fc42 --unique-debug-suffix -0.12.6-1.fc42.x86_64 --unique-debug-src-base ollama-0.12.6-1.fc42.x86_64 --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 -S debugsourcefiles.list /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6
find-debuginfo: starting
Extracting debug info from 10 files
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.note.gnu.build-id' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.init' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.plt' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.plt.sec' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.text' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.fini' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.gnu.hash' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.dynsym' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.gnu.version' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.gnu.version_r' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.rela.dyn' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.rela.plt' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.relr.dyn' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.rodata' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.eh_frame_hdr' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.eh_frame' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.gcc_except_table' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.note.gnu.property' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.note.package' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.init_array' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.fini_array' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.data.rel.ro' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.got' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.data' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.dynstr' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stGcqxZC: warning: allocated section `.dynamic' not in segment
[... the same set of objcopy "allocated section ... not in segment" warnings repeated for stA7mIIj, stYsHkae, stWqHhVN, and stt4Toc8 ...]
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.note.gnu.build-id' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.init' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.plt' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.plt.sec' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.text' not in segment
objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.fini' not in segment
objcopy:
/builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.gnu.hash' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.dynsym' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.gnu.version' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.gnu.version_r' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.rela.dyn' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.rela.plt' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.relr.dyn' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.rodata' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.eh_frame_hdr' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.eh_frame' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.gcc_except_table' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.note.gnu.property' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.note.package' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.init_array' not 
in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.fini_array' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.data.rel.ro' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.got' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.data' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.dynstr' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/st3JpbrI: warning: allocated section `.dynamic' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.note.gnu.build-id' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.init' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.plt' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.plt.sec' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.text' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.fini' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.gnu.hash' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.dynsym' not in segment objcopy: 
/builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.gnu.version' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.gnu.version_r' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.rela.dyn' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.rela.plt' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.relr.dyn' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.rodata' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.eh_frame_hdr' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.eh_frame' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.gcc_except_table' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.note.gnu.property' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.note.package' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.init_array' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.fini_array' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section 
`.data.rel.ro' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.got' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.data' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.dynstr' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stwqtNHp: warning: allocated section `.dynamic' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.note.gnu.build-id' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.init' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.plt' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.plt.sec' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.text' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.fini' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.gnu.hash' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.dynsym' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.gnu.version' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.gnu.version_r' not in 
segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.rela.dyn' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.rela.plt' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.relr.dyn' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.rodata' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.eh_frame_hdr' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.eh_frame' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.gcc_except_table' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.note.gnu.property' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.note.package' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.init_array' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.fini_array' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.data.rel.ro' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.got' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section 
`.data' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.dynstr' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/sty6bBSu: warning: allocated section `.dynamic' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.note.gnu.build-id' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.init' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.plt' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.plt.sec' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.text' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.fini' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.gnu.hash' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.dynsym' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.gnu.version' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.gnu.version_r' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.rela.dyn' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.rela.plt' not in 
segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.relr.dyn' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.rodata' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.eh_frame_hdr' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.eh_frame' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.gcc_except_table' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.note.gnu.property' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.note.package' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.init_array' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.fini_array' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.data.rel.ro' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.got' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.data' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.dynstr' not in segment objcopy: /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/lib64/ollama/stlbPTCf: warning: allocated section `.dynamic' 
not in segment warning: Unsupported auto-load script at offset 0 in section .debug_gdb_scripts of file /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/bin/ollama. Use `info auto-load python-scripts [REGEXP]' to list them. DWARF-compressing 10 files dwz: ./usr/bin/ollama-0.12.6-1.fc42.x86_64.debug: Found compressed .debug_aranges section, not attempting dwz compression dwz: ./usr/bin/ollama-0.12.6-1.fc42.x86_64.debug: Found compressed .debug_aranges section, not attempting dwz compression sepdebugcrcfix: Updated 9 CRC32s, 1 CRC32s did match. Creating .debug symlinks for symlinks to ELF files Copying sources found by 'debugedit -l' to /usr/src/debug/ollama-0.12.6-1.fc42.x86_64 find-debuginfo: done + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-ldconfig + /usr/lib/rpm/brp-compress + /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip + /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip + /usr/lib/rpm/check-rpaths + /usr/lib/rpm/redhat/brp-mangle-shebangs + /usr/lib/rpm/brp-remove-la-files + env /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 -j4 + /usr/lib/rpm/redhat/brp-python-hardlink + /usr/bin/add-determinism --brp -j4 /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT Scanned 114 directories and 449 files, processed 0 inodes, 0 modified (0 replaced + 0 rewritten), 0 unsupported format, 0 errors Reading /builddir/build/BUILD/ollama-0.12.6-build/SPECPARTS/rpm-debuginfo.specpart Processing files: ollama-0.12.6-1.fc42.x86_64 Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.yyKwzt + umask 022 + cd /builddir/build/BUILD/ollama-0.12.6-build + cd ollama-0.12.6 + DOCDIR=/builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/share/doc/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export DOCDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/README.md /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/share/doc/ollama + RPM_EC=0 ++ jobs -p 
+ exit 0
Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.mJoYEw
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.6-build
+ cd ollama-0.12.6
+ LICENSEDIR=/builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/share/licenses/ollama
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export LICENSEDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/share/licenses/ollama
+ cp -pr /builddir/build/BUILD/ollama-0.12.6-build/ollama-0.12.6/LICENSE /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT/usr/share/licenses/ollama
+ RPM_EC=0
++ jobs -p
+ exit 0
Provides: config(ollama) = 0.12.6-1.fc42 libggml-base.so()(64bit) libggml-cpu-alderlake.so()(64bit) libggml-cpu-haswell.so()(64bit) libggml-cpu-icelake.so()(64bit) libggml-cpu-sandybridge.so()(64bit) libggml-cpu-skylakex.so()(64bit) libggml-cpu-sse42.so()(64bit) libggml-cpu-x64.so()(64bit) libggml-vulkan.so()(64bit) ollama = 0.12.6-1.fc42 ollama(x86-64) = 0.12.6-1.fc42
Requires(interp): /bin/sh /bin/sh /bin/sh /bin/sh
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires(pre): /bin/sh /usr/bin/getent /usr/sbin/useradd
Requires(post): /bin/sh
Requires(preun): /bin/sh
Requires(postun): /bin/sh
Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.7)(64bit) libc.so.6(GLIBC_ABI_DT_RELR)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libgcc_s.so.1(GCC_3.4)(64bit) libggml-base.so()(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libresolv.so.2()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.11)(64bit) libstdc++.so.6(CXXABI_1.3.13)(64bit) libstdc++.so.6(CXXABI_1.3.15)(64bit) libstdc++.so.6(CXXABI_1.3.2)(64bit) libstdc++.so.6(CXXABI_1.3.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.17)(64bit) libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.19)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.22)(64bit) libstdc++.so.6(GLIBCXX_3.4.25)(64bit) libstdc++.so.6(GLIBCXX_3.4.26)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) libstdc++.so.6(GLIBCXX_3.4.32)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) libvulkan.so.1()(64bit) rtld(GNU_HASH)
Recommends: group(ollama) user(ollama)
Processing files: ollama-debugsource-0.12.6-1.fc42.x86_64
Provides: ollama-debugsource = 0.12.6-1.fc42 ollama-debugsource(x86-64) = 0.12.6-1.fc42
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Processing files: ollama-debuginfo-0.12.6-1.fc42.x86_64
Provides: debuginfo(build-id) = 3bb4c442cb833f91bbed4a0e2b8b8eae53183c1b debuginfo(build-id) = 6d6d8a7d3f714fdf2aeb1b0f769e9817292ab051 debuginfo(build-id) = 6ee8612316c190e353c8101075dfb9b2cb8c7948 debuginfo(build-id) = 7f023d9cdc7c51cb8f2ff73cbc17d1a0d5e5e495 debuginfo(build-id) = 8aa3b8db308dd802c4d922f22c9b159b11e93d7b debuginfo(build-id) = be4498f994fbd1d3a737815e3720b084e0fcd168 debuginfo(build-id) = c6bd518a3789aecc491bb6782774736eac180bcd debuginfo(build-id) = cbe25a2aae6c203a350b939b7c589d1dbdfbab87 debuginfo(build-id) = d8631ee050046b78f7c5743037191a80d0d3ca60 debuginfo(build-id) = e3769fa87e9bbb2920d3fdd7662917b672b88bce libggml-base.so-0.12.6-1.fc42.x86_64.debug()(64bit) libggml-cpu-alderlake.so-0.12.6-1.fc42.x86_64.debug()(64bit) libggml-cpu-haswell.so-0.12.6-1.fc42.x86_64.debug()(64bit) libggml-cpu-icelake.so-0.12.6-1.fc42.x86_64.debug()(64bit) libggml-cpu-sandybridge.so-0.12.6-1.fc42.x86_64.debug()(64bit) libggml-cpu-skylakex.so-0.12.6-1.fc42.x86_64.debug()(64bit) libggml-cpu-sse42.so-0.12.6-1.fc42.x86_64.debug()(64bit) libggml-cpu-x64.so-0.12.6-1.fc42.x86_64.debug()(64bit) libggml-vulkan.so-0.12.6-1.fc42.x86_64.debug()(64bit) ollama-debuginfo = 0.12.6-1.fc42 ollama-debuginfo(x86-64) = 0.12.6-1.fc42
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.6-1.fc42
Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILD/ollama-0.12.6-build/BUILDROOT
error: Installed (but unpackaged) file(s) found:
   /etc/ld.so.conf.d/ollamad-ld.conf
    Macro expanded in comment on line 111: %{_libdir}/ollama/libggml-cuda.so
    Installed (but unpackaged) file(s) found:
   /etc/ld.so.conf.d/ollamad-ld.conf
RPM build warnings:
RPM build errors:
Finish: rpmbuild ollama-0.12.6-1.fc42.src.rpm
Finish: build phase for ollama-0.12.6-1.fc42.src.rpm
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-42-x86_64-1761505656.433794/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
ERROR: Exception(/var/lib/copr-rpmbuild/results/ollama-0.12.6-1.fc42.src.rpm) Config(fedora-42-x86_64) 11 minutes 32 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M fc03a5edb0ef414b814a3753002937b2 -D /var/lib/mock/fedora-42-x86_64-1761505656.433794/root -a -u mockbuild --capability=cap_ipc_lock --capability=cap_ipc_lock --bind=/tmp/mock-resolv.9w1png5_:/etc/resolv.conf --bind=/dev/btrfs-control --bind=/dev/mapper/control --bind=/dev/fuse --bind=/dev/loop-control --bind=/dev/loop0 --bind=/dev/loop1 --bind=/dev/loop2 --bind=/dev/loop3 --bind=/dev/loop4 --bind=/dev/loop5 --bind=/dev/loop6 --bind=/dev/loop7 --bind=/dev/loop8 --bind=/dev/loop9 --bind=/dev/loop10 --bind=/dev/loop11 --console=pipe --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin '--setenv=PROMPT_COMMAND=printf "\033]0;\007"' '--setenv=PS1= \s-\v\$ ' --setenv=LANG=C.UTF-8 --resolv-conf=off bash --login -c '/usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/originals/ollama.spec'
Copr build error: Build failed
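The failure above is rpmbuild's check-files stage: %install puts /etc/ld.so.conf.d/ollamad-ld.conf into the buildroot, but no %files entry claims it. A minimal sketch of a spec fix, assuming the file is created during %install (the surrounding %files entries here are hypothetical, only the ld.so.conf.d line is taken from the log):

```spec
%files
%license LICENSE
%doc README.md
# Claim the installed ld.so.conf.d snippet so check-files stops flagging it;
# %%config(noreplace) preserves a locally edited copy across upgrades.
%config(noreplace) %{_sysconfdir}/ld.so.conf.d/ollamad-ld.conf

# The "Macro expanded in comment on line 111" warning is separate: rpmbuild
# expands macros even inside comments, so escape the percent sign there:
# %%{_libdir}/ollama/libggml-cuda.so
```

Alternatively, if the file is not meant to ship, deleting it at the end of %install (`rm -f %{buildroot}%{_sysconfdir}/ld.so.conf.d/ollamad-ld.conf`) would also satisfy the check.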