Warning: Permanently added '54.81.178.236' (ED25519) to the list of known hosts.

You can reproduce this build on your computer by running:

    sudo dnf install copr-rpmbuild
    /usr/bin/copr-rpmbuild --verbose --drop-resultdir \
        --task-url https://copr.fedorainfracloud.org/backend/get-build-task/9689276-fedora-42-x86_64 \
        --chroot fedora-42-x86_64

Version: 1.6
PID: 23349
Logging PID: 23351
Task:
{'allow_user_ssh': False,
 'appstream': False,
 'background': False,
 'build_id': 9689276,
 'buildroot_pkgs': ['cuda'],
 'chroot': 'fedora-42-x86_64',
 'enable_net': True,
 'fedora_review': True,
 'git_hash': '8067c5119620a1ce487d7c8984a43bbeec09f499',
 'git_repo': 'https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama-cuda/ollama',
 'isolation': 'default',
 'memory_reqs': 2048,
 'package_name': 'ollama',
 'package_version': '0.12.5-1',
 'project_dirname': 'ollama-cuda',
 'project_name': 'ollama-cuda',
 'project_owner': 'mwprado',
 'repo_priority': None,
 'repos': [{'baseurl': 'https://download.copr.fedorainfracloud.org/results/mwprado/ollama-cuda/fedora-42-x86_64/',
            'id': 'copr_base',
            'name': 'Copr repository',
            'priority': None},
           {'baseurl': 'https://developer.download.nvidia.com/compute/cuda/repos/fedora$releasever/$basearch/',
            'id': 'https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch',
            'name': 'Additional repo https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch'}],
 'sandbox': 'mwprado/ollama-cuda--mwprado',
 'source_json': {},
 'source_type': None,
 'ssh_public_keys': None,
 'storage': 0,
 'submitter': 'mwprado',
 'tags': [],
 'task_id': '9689276-fedora-42-x86_64',
 'timeout': 18000,
 'uses_devel_repo': False,
 'with_opts': [],
 'without_opts': []}

Running: git clone https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama-cuda/ollama /var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama --depth 500 --no-single-branch --recursive
cmd: ['git', 'clone', 'https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama-cuda/ollama', '/var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama', '--depth', '500', '--no-single-branch', '--recursive']
cwd: .
rc: 0
stdout:
stderr:
Cloning into '/var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama'...

Running: git checkout 8067c5119620a1ce487d7c8984a43bbeec09f499 --
cmd: ['git', 'checkout', '8067c5119620a1ce487d7c8984a43bbeec09f499', '--']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama
rc: 0
stdout:
stderr:
Note: switching to '8067c5119620a1ce487d7c8984a43bbeec09f499'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 8067c51 automatic import of ollama

Running: dist-git-client sources
cmd: ['dist-git-client', 'sources']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama
rc: 0
stdout:
stderr:
INFO: Reading stdout from command: git rev-parse --abbrev-ref HEAD
INFO: Reading stdout from command: git rev-parse HEAD
INFO: Reading sources specification file: sources
INFO: Downloading main.zip
INFO: Reading stdout from command: curl --help all
INFO: Calling: curl -H Pragma: -o main.zip --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/mwprado/ollama-cuda/ollama/main.zip/md5/5f44a6a5e39be48a168ac851abb3bdf1/main.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4573  100  4573    0     0   469k      0 --:--:-- --:--:-- --:--:--  496k
INFO: Reading stdout from command: md5sum main.zip
INFO: Downloading v0.12.5.zip
INFO: Calling: curl -H Pragma: -o v0.12.5.zip --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/mwprado/ollama-cuda/ollama/v0.12.5.zip/md5/9ade9ff7f51e2daafce16888804ebb00/v0.12.5.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.9M  100 10.9M    0     0   405M      0 --:--:-- --:--:-- --:--:--  420M
INFO: Reading stdout from command: md5sum v0.12.5.zip
tail: /var/lib/copr-rpmbuild/main.log: file truncated

Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1760489761.899953 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: mock.py version 6.3 starting (python version = 3.13.7, NVR = mock-6.3-1.fc42), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1760489761.899953 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
Start(bootstrap): init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish(bootstrap): init plugins
Start: init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish: init plugins
INFO: Signal handler active
Start: run
INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama/ollama.spec)  Config(fedora-42-x86_64)
Start: clean chroot
Finish: clean chroot
Mock Version: 6.3
INFO: Mock Version: 6.3
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1760489761.899953/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: /var/lib/copr-rpmbuild/results/configs/child.cfg newer than root cache; cache will be rebuilt
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
INFO: Guessed host environment type: unknown
INFO: Using container image: registry.fedoraproject.org/fedora:42
INFO: Pulling image: registry.fedoraproject.org/fedora:42
INFO: Tagging container image as mock-bootstrap-1e8bae0a-0511-47d7-b724-c578de283520
INFO: Checking that 8a0322e5d6e2444b1e6d98ae8c674446ca1a5e9509ff763892a120ecd5fde717 image matches host's architecture
INFO: Copy content of container 8a0322e5d6e2444b1e6d98ae8c674446ca1a5e9509ff763892a120ecd5fde717 to /var/lib/mock/fedora-42-x86_64-bootstrap-1760489761.899953/root
INFO: mounting 8a0322e5d6e2444b1e6d98ae8c674446ca1a5e9509ff763892a120ecd5fde717 with podman image mount
INFO: image 8a0322e5d6e2444b1e6d98ae8c674446ca1a5e9509ff763892a120ecd5fde717 as /var/lib/containers/storage/overlay/0d8af2c2504848bf6f0c2c1adca260e1eb53c02d7739b22de6e763122e75153f/merged
INFO: umounting image 8a0322e5d6e2444b1e6d98ae8c674446ca1a5e9509ff763892a120ecd5fde717 (/var/lib/containers/storage/overlay/0d8af2c2504848bf6f0c2c1adca260e1eb53c02d7739b22de6e763122e75153f/merged) with podman image umount
INFO: Removing image mock-bootstrap-1e8bae0a-0511-47d7-b724-c578de283520
INFO: Package manager dnf5 detected and used (fallback)
INFO: Not updating bootstrap chroot, bootstrap_image_ready=True
Start(bootstrap): creating root cache
Finish(bootstrap): creating root cache
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-1760489761.899953/root.
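The Task dictionary near the top of this log is the JSON payload served by the --task-url endpoint. As an illustration only (the trimmed dict literal below is a hypothetical stand-in for the real payload, with field names copied from the log), the repositories a build pulls from can be extracted like this:

```python
# Sketch: pull repo baseurls out of a Copr build-task dict.
# Field names mirror the Task dump in this log; the literal is a
# hypothetical, trimmed stand-in for the real JSON payload.
task = {
    "build_id": 9689276,
    "chroot": "fedora-42-x86_64",
    "repos": [
        {"id": "copr_base",
         "baseurl": "https://download.copr.fedorainfracloud.org/results/mwprado/ollama-cuda/fedora-42-x86_64/"},
        {"id": "nvidia_cuda",  # hypothetical short id for the NVIDIA repo
         "baseurl": "https://developer.download.nvidia.com/compute/cuda/repos/fedora$releasever/$basearch/"},
    ],
}

def repo_baseurls(task: dict) -> list[str]:
    """Return the baseurl of every repo enabled for this build task."""
    return [r["baseurl"] for r in task.get("repos", [])]

urls = repo_baseurls(task)
```

This is the same repo set that later shows up in the dnf5 "Updating and loading repositories" step.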
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start: Waiting for yumcache lock
Finish: Waiting for yumcache lock
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Package manager dnf5 detected and used (direct choice)
INFO: Buildroot is handled by package management downloaded with a bootstrap image:
  rpm-4.20.1-1.fc42.x86_64
  rpm-sequoia-1.7.0-5.fc42.x86_64
  dnf5-5.2.16.0-1.fc42.x86_64
  dnf5-plugins-5.2.16.0-1.fc42.x86_64
Start: installing minimal buildroot with dnf5
Updating and loading repositories:
 Additional repo https_developer_downlo 100% |  80.3 KiB/s |  3.9 KiB | 00m00s
 Copr repository                        100% |  34.3 KiB/s |  1.8 KiB | 00m00s
 fedora                                 100% |  46.2 KiB/s | 30.3 KiB | 00m01s
 updates                                100% | 188.1 KiB/s | 29.0 KiB | 00m00s
Repositories loaded.

Package  Arch  Version  Repository  Size
Installing group/module packages:
 bash x86_64 5.2.37-1.fc42 fedora 8.2 MiB
 bzip2 x86_64 1.0.8-20.fc42 fedora 99.3 KiB
 coreutils x86_64 9.6-6.fc42 updates 5.4 MiB
 cpio x86_64 2.15-4.fc42 fedora 1.1 MiB
 diffutils x86_64 3.12-1.fc42 updates 1.6 MiB
 fedora-release-common noarch 42-30 updates 20.2 KiB
 findutils x86_64 1:4.10.0-5.fc42 fedora 1.9 MiB
 gawk x86_64 5.3.1-1.fc42 fedora 1.7 MiB
 glibc-minimal-langpack x86_64 2.41-11.fc42 updates 0.0 B
 grep x86_64 3.11-10.fc42 fedora 1.0 MiB
 gzip x86_64 1.13-3.fc42 fedora 392.9 KiB
 info x86_64 7.2-3.fc42 fedora 357.9 KiB
 patch x86_64 2.8-1.fc42 updates 222.8 KiB
 redhat-rpm-config noarch 342-4.fc42 updates 185.5 KiB
 rpm-build x86_64 4.20.1-1.fc42 fedora 168.7 KiB
 sed x86_64 4.9-4.fc42 fedora 857.3 KiB
 shadow-utils x86_64 2:4.17.4-1.fc42 fedora 4.0 MiB
 tar x86_64 2:1.35-5.fc42 fedora 3.0 MiB
 unzip x86_64 6.0-66.fc42 fedora 390.3 KiB
 util-linux x86_64 2.40.4-7.fc42 fedora 3.4 MiB
 which x86_64 2.23-2.fc42 updates 83.5 KiB
 xz x86_64 1:5.8.1-2.fc42 updates 1.3 MiB
Installing dependencies:
 add-determinism x86_64 0.6.0-1.fc42 fedora 2.5 MiB
 alternatives x86_64 1.33-1.fc42 updates 62.2 KiB
 ansible-srpm-macros noarch 1-17.1.fc42 fedora 35.7 KiB
 audit-libs x86_64 4.1.1-1.fc42 updates 378.8 KiB
 basesystem noarch 11-22.fc42 fedora 0.0 B
 binutils x86_64 2.44-6.fc42 updates 25.8 MiB
 build-reproducibility-srpm-macros noarch 0.6.0-1.fc42 fedora 735.0 B
 bzip2-libs x86_64 1.0.8-20.fc42 fedora 84.6 KiB
 ca-certificates noarch 2025.2.80_v9.0.304-1.0.fc42 updates 2.7 MiB
 coreutils-common x86_64 9.6-6.fc42 updates 11.1 MiB
 crypto-policies noarch 20250707-1.gitad370a8.fc42 updates 142.9 KiB
 curl x86_64 8.11.1-6.fc42 updates 450.6 KiB
 cyrus-sasl-lib x86_64 2.1.28-30.fc42 fedora 2.3 MiB
 debugedit x86_64 5.1-7.fc42 updates 192.7 KiB
 dwz x86_64 0.16-1.fc42 updates 287.1 KiB
 ed x86_64 1.21-2.fc42 fedora 146.5 KiB
 efi-srpm-macros noarch 6-3.fc42 updates 40.1 KiB
 elfutils x86_64 0.193-2.fc42 updates 2.9 MiB
 elfutils-debuginfod-client x86_64 0.193-2.fc42 updates 83.9 KiB
 elfutils-default-yama-scope noarch 0.193-2.fc42 updates 1.8 KiB
 elfutils-libelf x86_64 0.193-2.fc42 updates 1.2 MiB
 elfutils-libs x86_64 0.193-2.fc42 updates 683.4 KiB
 fedora-gpg-keys noarch 42-1 fedora 128.2 KiB
 fedora-release noarch 42-30 updates 0.0 B
 fedora-release-identity-basic noarch 42-30 updates 646.0 B
 fedora-repos noarch 42-1 fedora 4.9 KiB
 file x86_64 5.46-3.fc42 updates 100.2 KiB
 file-libs x86_64 5.46-3.fc42 updates 11.9 MiB
 filesystem x86_64 3.18-47.fc42 updates 112.0 B
 filesystem-srpm-macros noarch 3.18-47.fc42 updates 38.2 KiB
 fonts-srpm-macros noarch 1:2.0.5-22.fc42 updates 55.8 KiB
 forge-srpm-macros noarch 0.4.0-2.fc42 fedora 38.9 KiB
 fpc-srpm-macros noarch 1.3-14.fc42 fedora 144.0 B
 gdb-minimal x86_64 16.3-1.fc42 updates 13.2 MiB
 gdbm-libs x86_64 1:1.23-9.fc42 fedora 129.9 KiB
 ghc-srpm-macros noarch 1.9.2-2.fc42 fedora 779.0 B
 glibc x86_64 2.41-11.fc42 updates 6.6 MiB
 glibc-common x86_64 2.41-11.fc42 updates 1.0 MiB
 glibc-gconv-extra x86_64 2.41-11.fc42 updates 7.2 MiB
 gmp x86_64 1:6.3.0-4.fc42 fedora 811.3 KiB
 gnat-srpm-macros noarch 6-7.fc42 fedora 1.0 KiB
 gnulib-l10n noarch 20241231-1.fc42 updates 655.0 KiB
 go-srpm-macros noarch 3.8.0-1.fc42 updates 61.9 KiB
 jansson x86_64 2.14-2.fc42 fedora 93.1 KiB
 json-c x86_64 0.18-2.fc42 fedora 86.7 KiB
 kernel-srpm-macros noarch 1.0-25.fc42 fedora 1.9 KiB
 keyutils-libs x86_64 1.6.3-5.fc42 fedora 58.3 KiB
 krb5-libs x86_64 1.21.3-6.fc42 updates 2.3 MiB
 libacl x86_64 2.3.2-3.fc42 fedora 38.3 KiB
 libarchive x86_64 3.8.1-1.fc42 updates 955.2 KiB
 libattr x86_64 2.5.2-5.fc42 fedora 27.1 KiB
 libblkid x86_64 2.40.4-7.fc42 fedora 262.4 KiB
 libbrotli x86_64 1.1.0-6.fc42 fedora 841.3 KiB
 libcap x86_64 2.73-2.fc42 fedora 207.1 KiB
 libcap-ng x86_64 0.8.5-4.fc42 fedora 72.9 KiB
 libcom_err x86_64 1.47.2-3.fc42 fedora 67.1 KiB
 libcurl x86_64 8.11.1-6.fc42 updates 834.1 KiB
 libeconf x86_64 0.7.6-2.fc42 updates 64.6 KiB
 libevent x86_64 2.1.12-15.fc42 fedora 903.1 KiB
 libfdisk x86_64 2.40.4-7.fc42 fedora 372.3 KiB
 libffi x86_64 3.4.6-5.fc42 fedora 82.3 KiB
 libgcc x86_64 15.2.1-1.fc42 updates 266.6 KiB
 libgomp x86_64 15.2.1-1.fc42 updates 541.1 KiB
 libidn2 x86_64 2.3.8-1.fc42 fedora 556.5 KiB
 libmount x86_64 2.40.4-7.fc42 fedora 356.3 KiB
 libnghttp2 x86_64 1.64.0-3.fc42 fedora 170.4 KiB
 libpkgconf x86_64 2.3.0-2.fc42 fedora 78.1 KiB
 libpsl x86_64 0.21.5-5.fc42 fedora 76.4 KiB
 libselinux x86_64 3.8-3.fc42 updates 193.1 KiB
 libsemanage x86_64 3.8.1-2.fc42 updates 304.4 KiB
 libsepol x86_64 3.8-1.fc42 fedora 826.0 KiB
 libsmartcols x86_64 2.40.4-7.fc42 fedora 180.4 KiB
 libssh x86_64 0.11.3-1.fc42 updates 567.1 KiB
 libssh-config noarch 0.11.3-1.fc42 updates 277.0 B
 libstdc++ x86_64 15.2.1-1.fc42 updates 2.8 MiB
 libtasn1 x86_64 4.20.0-1.fc42 fedora 176.3 KiB
 libtool-ltdl x86_64 2.5.4-4.fc42 fedora 70.1 KiB
 libunistring x86_64 1.1-9.fc42 fedora 1.7 MiB
 libuuid x86_64 2.40.4-7.fc42 fedora 37.3 KiB
 libverto x86_64 0.3.2-10.fc42 fedora 25.4 KiB
 libxcrypt x86_64 4.4.38-7.fc42 updates 284.5 KiB
 libxml2 x86_64 2.12.10-1.fc42 fedora 1.7 MiB
 libzstd x86_64 1.5.7-1.fc42 fedora 807.8 KiB
 lua-libs x86_64 5.4.8-1.fc42 updates 280.8 KiB
 lua-srpm-macros noarch 1-15.fc42 fedora 1.3 KiB
 lz4-libs x86_64 1.10.0-2.fc42 fedora 157.4 KiB
 mpfr x86_64 4.2.2-1.fc42 fedora 828.8 KiB
 ncurses-base noarch 6.5-5.20250125.fc42 fedora 326.8 KiB
 ncurses-libs x86_64 6.5-5.20250125.fc42 fedora 946.3 KiB
 ocaml-srpm-macros noarch 10-4.fc42 fedora 1.9 KiB
 openblas-srpm-macros noarch 2-19.fc42 fedora 112.0 B
 openldap x86_64 2.6.10-1.fc42 updates 655.8 KiB
 openssl-libs x86_64 1:3.2.6-2.fc42 updates 7.8 MiB
 p11-kit x86_64 0.25.8-1.fc42 updates 2.3 MiB
 p11-kit-trust x86_64 0.25.8-1.fc42 updates 446.5 KiB
 package-notes-srpm-macros noarch 0.5-13.fc42 fedora 1.6 KiB
 pam-libs x86_64 1.7.0-6.fc42 updates 126.7 KiB
 pcre2 x86_64 10.45-1.fc42 fedora 697.7 KiB
 pcre2-syntax noarch 10.45-1.fc42 fedora 273.9 KiB
 perl-srpm-macros noarch 1-57.fc42 fedora 861.0 B
 pkgconf x86_64 2.3.0-2.fc42 fedora 88.5 KiB
 pkgconf-m4 noarch 2.3.0-2.fc42 fedora 14.4 KiB
 pkgconf-pkg-config x86_64 2.3.0-2.fc42 fedora 989.0 B
 popt x86_64 1.19-8.fc42 fedora 132.8 KiB
 publicsuffix-list-dafsa noarch 20250616-1.fc42 updates 69.1 KiB
 pyproject-srpm-macros noarch 1.18.4-1.fc42 updates 1.9 KiB
 python-srpm-macros noarch 3.13-5.fc42 updates 51.0 KiB
 qt5-srpm-macros noarch 5.15.17-1.fc42 updates 500.0 B
 qt6-srpm-macros noarch 6.9.2-1.fc42 updates 464.0 B
 readline x86_64 8.2-13.fc42 fedora 485.0 KiB
 rpm x86_64 4.20.1-1.fc42 fedora 3.1 MiB
 rpm-build-libs x86_64 4.20.1-1.fc42 fedora 206.6 KiB
 rpm-libs x86_64 4.20.1-1.fc42 fedora 721.8 KiB
 rpm-sequoia x86_64 1.7.0-5.fc42 fedora 2.4 MiB
 rust-srpm-macros noarch 26.4-1.fc42 updates 4.8 KiB
 setup noarch 2.15.0-13.fc42 fedora 720.9 KiB
 sqlite-libs x86_64 3.47.2-5.fc42 updates 1.5 MiB
 systemd-libs x86_64 257.9-2.fc42 updates 2.2 MiB
 systemd-standalone-sysusers x86_64 257.9-2.fc42 updates 277.3 KiB
 tree-sitter-srpm-macros noarch 0.1.0-8.fc42 fedora 6.5 KiB
 util-linux-core x86_64 2.40.4-7.fc42 fedora 1.4 MiB
 xxhash-libs x86_64 0.8.3-2.fc42 fedora 90.2 KiB
 xz-libs x86_64 1:5.8.1-2.fc42 updates 217.8 KiB
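The Size column above uses dnf5's human-readable units. As a side illustration (not part of the build tooling), such sizes can be parsed back into bytes and summed, which is how one could sanity-check the "178 MiB" figure in the transaction summary below:

```python
# Illustrative helper: convert dnf5-style human-readable sizes
# ("8.2 MiB", "735.0 B") back into bytes so a package list can be totalled.
UNITS = {"B": 1, "KiB": 1024, "MiB": 1024 ** 2, "GiB": 1024 ** 3}

def parse_size(text: str) -> float:
    """Parse a '<value> <unit>' string into a byte count."""
    value, unit = text.split()
    return float(value) * UNITS[unit]

# A few entries copied from the table above.
sizes = ["8.2 MiB", "99.3 KiB", "5.4 MiB", "735.0 B"]
total_bytes = sum(parse_size(s) for s in sizes)
```

Summing all 149 entries this way should land close to the installed-size total the transaction summary reports.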
 zig-srpm-macros noarch 1-4.fc42 fedora 1.1 KiB
 zip x86_64 3.0-43.fc42 fedora 698.5 KiB
 zlib-ng-compat x86_64 2.2.5-2.fc42 updates 137.6 KiB
 zstd x86_64 1.5.7-1.fc42 fedora 1.7 MiB
Installing groups:
 Buildsystem building group

Transaction Summary:
 Installing: 149 packages

Total size of inbound packages is 52 MiB. Need to download 0 B.
After this operation, 178 MiB extra will be used (install 178 MiB, remove 0 B).
[  1/149] tar-2:1.35-5.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[  2/149] bzip2-0:1.0.8-20.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[  3/149] rpm-build-0:4.20.1-1.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[  4/149] unzip-0:6.0-66.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[  5/149] cpio-0:2.15-4.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[  6/149] bash-0:5.2.37-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[  7/149] grep-0:3.11-10.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[  8/149] sed-0:4.9-4.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[  9/149] shadow-utils-2:4.17.4-1.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 10/149] findutils-1:4.10.0-5.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 11/149] gzip-0:1.13-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 12/149] info-0:7.2-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 13/149] redhat-rpm-config-0:342-4.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 14/149] which-0:2.23-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 15/149] coreutils-0:9.6-6.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 16/149] patch-0:2.8-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 17/149] util-linux-0:2.40.4-7.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 18/149] diffutils-0:3.12-1.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 19/149] fedora-release-common-0:42-30 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 20/149] gawk-0:5.3.1-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 21/149] glibc-minimal-langpack-0:2.41 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 22/149] xz-1:5.8.1-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 23/149] libacl-0:2.3.2-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 24/149] bzip2-libs-0:1.0.8-20.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 25/149] popt-0:1.19-8.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 26/149] readline-0:8.2-13.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 27/149] rpm-0:4.20.1-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 28/149] rpm-build-libs-0:4.20.1-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 29/149] rpm-libs-0:4.20.1-1.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 30/149] zstd-0:1.5.7-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 31/149] ncurses-libs-0:6.5-5.20250125 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 32/149] pcre2-0:10.45-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 33/149] setup-0:2.15.0-13.fc42.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 34/149] ansible-srpm-macros-0:1-17.1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 35/149] build-reproducibility-srpm-ma 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 36/149] forge-srpm-macros-0:0.4.0-2.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 37/149] fpc-srpm-macros-0:1.3-14.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 38/149] ghc-srpm-macros-0:1.9.2-2.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 39/149] gnat-srpm-macros-0:6-7.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 40/149] kernel-srpm-macros-0:1.0-25.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 41/149] lua-srpm-macros-0:1-15.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 42/149] ocaml-srpm-macros-0:10-4.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 43/149] openblas-srpm-macros-0:2-19.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 44/149] package-notes-srpm-macros-0:0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 45/149] perl-srpm-macros-0:1-57.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 46/149] tree-sitter-srpm-macros-0:0.1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 47/149] zig-srpm-macros-0:1-4.fc42.no 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 48/149] zip-0:3.0-43.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 49/149] gmp-1:6.3.0-4.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 50/149] libattr-0:2.5.2-5.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 51/149] libcap-0:2.73-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 52/149] coreutils-common-0:9.6-6.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 53/149] ed-0:1.21-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 54/149] libblkid-0:2.40.4-7.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 55/149] libcap-ng-0:0.8.5-4.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 56/149] libfdisk-0:2.40.4-7.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 57/149] libmount-0:2.40.4-7.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 58/149] libsmartcols-0:2.40.4-7.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 59/149] libuuid-0:2.40.4-7.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 60/149] util-linux-core-0:2.40.4-7.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 61/149] fedora-repos-0:42-1.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 62/149] mpfr-0:4.2.2-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 63/149] glibc-common-0:2.41-11.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 64/149] xz-libs-1:5.8.1-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 65/149] libzstd-0:1.5.7-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 66/149] rpm-sequoia-0:1.7.0-5.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 67/149] lz4-libs-0:1.10.0-2.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 68/149] ncurses-base-0:6.5-5.20250125 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 69/149] pcre2-syntax-0:10.45-1.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 70/149] add-determinism-0:0.6.0-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 71/149] gnulib-l10n-0:20241231-1.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 72/149] fedora-gpg-keys-0:42-1.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 73/149] glibc-0:2.41-11.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 74/149] glibc-gconv-extra-0:2.41-11.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 75/149] basesystem-0:11-22.fc42.noarc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 76/149] dwz-0:0.16-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 77/149] efi-srpm-macros-0:6-3.fc42.no 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 78/149] file-0:5.46-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 79/149] file-libs-0:5.46-3.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 80/149] filesystem-srpm-macros-0:3.18 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 81/149] fonts-srpm-macros-1:2.0.5-22. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 82/149] go-srpm-macros-0:3.8.0-1.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 83/149] pyproject-srpm-macros-0:1.18. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 84/149] python-srpm-macros-0:3.13-5.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 85/149] qt5-srpm-macros-0:5.15.17-1.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 86/149] qt6-srpm-macros-0:6.9.2-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 87/149] rust-srpm-macros-0:26.4-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 88/149] filesystem-0:3.18-47.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 89/149] libgcc-0:15.2.1-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 90/149] zlib-ng-compat-0:2.2.5-2.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 91/149] elfutils-libelf-0:0.193-2.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 92/149] elfutils-libs-0:0.193-2.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 93/149] elfutils-0:0.193-2.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 94/149] elfutils-debuginfod-client-0: 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 95/149] json-c-0:0.18-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 96/149] libselinux-0:3.8-3.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 97/149] libsepol-0:3.8-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 98/149] systemd-libs-0:257.9-2.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 99/149] libstdc++-0:15.2.1-1.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[100/149] libxcrypt-0:4.4.38-7.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[101/149] audit-libs-0:4.1.1-1.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[102/149] pam-libs-0:1.7.0-6.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[103/149] libeconf-0:0.7.6-2.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[104/149] libsemanage-0:3.8.1-2.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[105/149] openssl-libs-1:3.2.6-2.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[106/149] lua-libs-0:5.4.8-1.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[107/149] sqlite-libs-0:3.47.2-5.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[108/149] libgomp-0:15.2.1-1.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[109/149] binutils-0:2.44-6.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[110/149] jansson-0:2.14-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[111/149] debugedit-0:5.1-7.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[112/149] libarchive-0:3.8.1-1.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[113/149] libxml2-0:2.12.10-1.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[114/149] pkgconf-pkg-config-0:2.3.0-2. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[115/149] pkgconf-0:2.3.0-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[116/149] pkgconf-m4-0:2.3.0-2.fc42.noa 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[117/149] libpkgconf-0:2.3.0-2.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[118/149] curl-0:8.11.1-6.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[119/149] ca-certificates-0:2025.2.80_v 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[120/149] crypto-policies-0:20250707-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[121/149] elfutils-default-yama-scope-0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[122/149] libffi-0:3.4.6-5.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[123/149] p11-kit-0:0.25.8-1.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[124/149] libtasn1-0:4.20.0-1.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[125/149] p11-kit-trust-0:0.25.8-1.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[126/149] alternatives-0:1.33-1.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[127/149] fedora-release-0:42-30.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[128/149] fedora-release-identity-basic 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[129/149] libcurl-0:8.11.1-6.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[130/149] libbrotli-0:1.1.0-6.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[131/149] libidn2-0:2.3.8-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[132/149] libnghttp2-0:1.64.0-3.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[133/149] libpsl-0:0.21.5-5.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[134/149] libssh-0:0.11.3-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[135/149] libunistring-0:1.1-9.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[136/149] libssh-config-0:0.11.3-1.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[137/149] gdb-minimal-0:16.3-1.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[138/149] xxhash-libs-0:0.8.3-2.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[139/149] systemd-standalone-sysusers-0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[140/149] publicsuffix-list-dafsa-0:202 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[141/149] krb5-libs-0:1.21.3-6.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[142/149] keyutils-libs-0:1.6.3-5.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[143/149] libcom_err-0:1.47.2-3.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[144/149] libverto-0:0.3.2-10.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[145/149] openldap-0:2.6.10-1.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[146/149] cyrus-sasl-lib-0:2.1.28-30.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[147/149] libevent-0:2.1.12-15.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[148/149] libtool-ltdl-0:2.5.4-4.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[149/149] gdbm-libs-1:1.23-9.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
--------------------------------------------------------------------------------
[149/149] Total 100% | 0.0 B/s | 0.0 B | 00m00s
Running transaction
Importing OpenPGP key 0x105EF944:
 UserID     : "Fedora (42) <fedora-42-primary@fedoraproject.org>"
 Fingerprint: B0F4950458F69E1150C6C5EDC8AC4916105EF944
 From       : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-42-primary
The key was successfully imported.
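The source downloads earlier in the log are fetched from md5-addressed URLs and then checked with md5sum. A minimal Python sketch of that verification step (the file and digest below are throwaway stand-ins, not the real main.zip):

```python
import hashlib
import tempfile

def md5_matches(path: str, expected_md5: str) -> bool:
    """Hash a file in chunks and compare hex digests, mirroring the
    md5 check that dist-git-client applies to downloaded sources."""
    h = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_md5

# Hypothetical usage against a throwaway temp file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"example payload")
    path = tmp.name
ok = md5_matches(path, hashlib.md5(b"example payload").hexdigest())
```

Chunked hashing keeps memory flat even for large archives like the 10.9M v0.12.5.zip.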
[  1/151] Verify package files 100% | 642.0 B/s | 149.0 B | 00m00s
[  2/151] Prepare transaction 100% | 2.5 KiB/s | 149.0 B | 00m00s
[  3/151] Installing libgcc-0:15.2.1-1. 100% | 29.1 MiB/s | 268.2 KiB | 00m00s
[  4/151] Installing publicsuffix-list- 100% | 68.2 MiB/s | 69.8 KiB | 00m00s
[  5/151] Installing libssh-config-0:0. 100% | 0.0 B/s | 816.0 B | 00m00s
[  6/151] Installing fedora-release-ide 100% | 882.8 KiB/s | 904.0 B | 00m00s
[  7/151] Installing fedora-gpg-keys-0: 100% | 28.4 MiB/s | 174.8 KiB | 00m00s
[  8/151] Installing fedora-repos-0:42- 100% | 0.0 B/s | 5.7 KiB | 00m00s
[  9/151] Installing fedora-release-com 100% | 23.9 MiB/s | 24.5 KiB | 00m00s
[ 10/151] Installing fedora-release-0:4 100% | 779.0 B/s | 124.0 B | 00m00s
>>> Running sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Finished sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Scriptlet output:
>>> Creating group 'adm' with GID 4.
>>> Creating group 'audio' with GID 63.
>>> Creating group 'bin' with GID 1.
>>> Creating group 'cdrom' with GID 11.
>>> Creating group 'clock' with GID 103.
>>> Creating group 'daemon' with GID 2.
>>> Creating group 'dialout' with GID 18.
>>> Creating group 'disk' with GID 6.
>>> Creating group 'floppy' with GID 19.
>>> Creating group 'ftp' with GID 50.
>>> Creating group 'games' with GID 20.
>>> Creating group 'input' with GID 104.
>>> Creating group 'kmem' with GID 9.
>>> Creating group 'kvm' with GID 36.
>>> Creating group 'lock' with GID 54.
>>> Creating group 'lp' with GID 7.
>>> Creating group 'mail' with GID 12.
>>> Creating group 'man' with GID 15.
>>> Creating group 'mem' with GID 8.
>>> Creating group 'nobody' with GID 65534.
>>> Creating group 'render' with GID 105.
>>> Creating group 'root' with GID 0.
>>> Creating group 'sgx' with GID 106.
>>> Creating group 'sys' with GID 3.
>>> Creating group 'tape' with GID 33.
>>> Creating group 'tty' with GID 5.
>>> Creating group 'users' with GID 100.
>>> Creating group 'utmp' with GID 22.
>>> Creating group 'video' with GID 39.
>>> Creating group 'wheel' with GID 10.
>>>
>>> Running sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Finished sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Scriptlet output:
>>> Creating user 'adm' (adm) with UID 3 and GID 4.
>>> Creating user 'bin' (bin) with UID 1 and GID 1.
>>> Creating user 'daemon' (daemon) with UID 2 and GID 2.
>>> Creating user 'ftp' (FTP User) with UID 14 and GID 50.
>>> Creating user 'games' (games) with UID 12 and GID 20.
>>> Creating user 'halt' (halt) with UID 7 and GID 0.
>>> Creating user 'lp' (lp) with UID 4 and GID 7.
>>> Creating user 'mail' (mail) with UID 8 and GID 12.
>>> Creating user 'nobody' (Kernel Overflow User) with UID 65534 and GID 65534.
>>> Creating user 'operator' (operator) with UID 11 and GID 0.
>>> Creating user 'root' (Super User) with UID 0 and GID 0.
>>> Creating user 'shutdown' (shutdown) with UID 6 and GID 0.
>>> Creating user 'sync' (sync) with UID 5 and GID 0.
>>>
[ 11/151] Installing setup-0:2.15.0-13. 100% | 29.6 MiB/s | 726.7 KiB | 00m00s
[ 12/151] Installing filesystem-0:3.18- 100% | 1.8 MiB/s | 212.8 KiB | 00m00s
[ 13/151] Installing basesystem-0:11-22 100% | 0.0 B/s | 124.0 B | 00m00s
[ 14/151] Installing pkgconf-m4-0:2.3.0 100% | 0.0 B/s | 14.8 KiB | 00m00s
[ 15/151] Installing rust-srpm-macros-0 100% | 0.0 B/s | 5.6 KiB | 00m00s
[ 16/151] Installing qt6-srpm-macros-0: 100% | 0.0 B/s | 740.0 B | 00m00s
[ 17/151] Installing qt5-srpm-macros-0: 100% | 0.0 B/s | 776.0 B | 00m00s
[ 18/151] Installing gnulib-l10n-0:2024 100% | 129.3 MiB/s | 661.9 KiB | 00m00s
[ 19/151] Installing coreutils-common-0 100% | 272.0 MiB/s | 11.2 MiB | 00m00s
[ 20/151] Installing pcre2-syntax-0:10. 100% | 135.0 MiB/s | 276.4 KiB | 00m00s
[ 21/151] Installing ncurses-base-0:6.5 100% | 57.3 MiB/s | 352.2 KiB | 00m00s
[ 22/151] Installing glibc-minimal-lang 100% | 0.0 B/s | 124.0 B | 00m00s
[ 23/151] Installing ncurses-libs-0:6.5 100% | 155.1 MiB/s | 952.8 KiB | 00m00s
[ 24/151] Installing glibc-0:2.41-11.fc 100% | 141.5 MiB/s | 6.7 MiB | 00m00s
[ 25/151] Installing bash-0:5.2.37-1.fc 100% | 194.5 MiB/s | 8.2 MiB | 00m00s
[ 26/151] Installing glibc-common-0:2.4 100% | 44.4 MiB/s | 1.0 MiB | 00m00s
[ 27/151] Installing glibc-gconv-extra- 100% | 182.7 MiB/s | 7.3 MiB | 00m00s
[ 28/151] Installing zlib-ng-compat-0:2 100% | 135.2 MiB/s | 138.4 KiB | 00m00s
[ 29/151] Installing bzip2-libs-0:1.0.8 100% | 83.7 MiB/s | 85.7 KiB | 00m00s
[ 30/151] Installing xz-libs-1:5.8.1-2. 100% | 213.8 MiB/s | 218.9 KiB | 00m00s
[ 31/151] Installing libuuid-0:2.40.4-7 100% | 37.5 MiB/s | 38.4 KiB | 00m00s
[ 32/151] Installing libblkid-0:2.40.4- 100% | 128.7 MiB/s | 263.5 KiB | 00m00s
[ 33/151] Installing popt-0:1.19-8.fc42 100% | 45.4 MiB/s | 139.4 KiB | 00m00s
[ 34/151] Installing readline-0:8.2-13. 100% | 237.9 MiB/s | 487.1 KiB | 00m00s
[ 35/151] Installing gmp-1:6.3.0-4.fc42 100% | 198.6 MiB/s | 813.5 KiB | 00m00s
[ 36/151] Installing libzstd-0:1.5.7-1. 100% | 263.4 MiB/s | 809.1 KiB | 00m00s
[ 37/151] Installing elfutils-libelf-0: 100% | 233.3 MiB/s | 1.2 MiB | 00m00s
[ 38/151] Installing libstdc++-0:15.2.1 100% | 257.8 MiB/s | 2.8 MiB | 00m00s
[ 39/151] Installing libxcrypt-0:4.4.38 100% | 140.2 MiB/s | 287.2 KiB | 00m00s
[ 40/151] Installing libattr-0:2.5.2-5. 100% | 27.4 MiB/s | 28.1 KiB | 00m00s
[ 41/151] Installing libacl-0:2.3.2-3.f 100% | 38.2 MiB/s | 39.2 KiB | 00m00s
[ 42/151] Installing dwz-0:0.16-1.fc42.
100% | 16.6 MiB/s | 288.5 KiB | 00m00s [ 43/151] Installing mpfr-0:4.2.2-1.fc4 100% | 162.2 MiB/s | 830.4 KiB | 00m00s [ 44/151] Installing gawk-0:5.3.1-1.fc4 100% | 70.6 MiB/s | 1.7 MiB | 00m00s [ 45/151] Installing unzip-0:6.0-66.fc4 100% | 22.6 MiB/s | 393.8 KiB | 00m00s [ 46/151] Installing file-libs-0:5.46-3 100% | 539.0 MiB/s | 11.9 MiB | 00m00s [ 47/151] Installing file-0:5.46-3.fc42 100% | 4.0 MiB/s | 101.7 KiB | 00m00s [ 48/151] Installing crypto-policies-0: 100% | 23.4 MiB/s | 167.8 KiB | 00m00s [ 49/151] Installing pcre2-0:10.45-1.fc 100% | 170.7 MiB/s | 699.1 KiB | 00m00s [ 50/151] Installing grep-0:3.11-10.fc4 100% | 43.6 MiB/s | 1.0 MiB | 00m00s [ 51/151] Installing xz-1:5.8.1-2.fc42. 100% | 57.9 MiB/s | 1.3 MiB | 00m00s [ 52/151] Installing libcap-ng-0:0.8.5- 100% | 73.1 MiB/s | 74.8 KiB | 00m00s [ 53/151] Installing audit-libs-0:4.1.1 100% | 124.2 MiB/s | 381.5 KiB | 00m00s [ 54/151] Installing libsmartcols-0:2.4 100% | 177.3 MiB/s | 181.5 KiB | 00m00s [ 55/151] Installing lz4-libs-0:1.10.0- 100% | 154.7 MiB/s | 158.5 KiB | 00m00s [ 56/151] Installing libsepol-0:3.8-1.f 100% | 201.9 MiB/s | 827.0 KiB | 00m00s [ 57/151] Installing libselinux-0:3.8-3 100% | 94.9 MiB/s | 194.3 KiB | 00m00s [ 58/151] Installing sed-0:4.9-4.fc42.x 100% | 40.2 MiB/s | 865.5 KiB | 00m00s [ 59/151] Installing findutils-1:4.10.0 100% | 81.5 MiB/s | 1.9 MiB | 00m00s [ 60/151] Installing libmount-0:2.40.4- 100% | 174.5 MiB/s | 357.3 KiB | 00m00s [ 61/151] Installing libeconf-0:0.7.6-2 100% | 64.7 MiB/s | 66.2 KiB | 00m00s [ 62/151] Installing pam-libs-0:1.7.0-6 100% | 63.0 MiB/s | 129.1 KiB | 00m00s [ 63/151] Installing libcap-0:2.73-2.fc 100% | 12.2 MiB/s | 212.1 KiB | 00m00s [ 64/151] Installing systemd-libs-0:257 100% | 223.2 MiB/s | 2.2 MiB | 00m00s [ 65/151] Installing lua-libs-0:5.4.8-1 100% | 137.7 MiB/s | 282.0 KiB | 00m00s [ 66/151] Installing libffi-0:3.4.6-5.f 100% | 81.7 MiB/s | 83.7 KiB | 00m00s [ 67/151] Installing libtasn1-0:4.20.0- 100% | 87.0 MiB/s | 178.1 KiB 
| 00m00s [ 68/151] Installing p11-kit-0:0.25.8-1 100% | 84.8 MiB/s | 2.3 MiB | 00m00s [ 69/151] Installing alternatives-0:1.3 100% | 3.9 MiB/s | 63.8 KiB | 00m00s [ 70/151] Installing libunistring-0:1.1 100% | 246.7 MiB/s | 1.7 MiB | 00m00s [ 71/151] Installing libidn2-0:2.3.8-1. 100% | 109.9 MiB/s | 562.7 KiB | 00m00s [ 72/151] Installing libpsl-0:0.21.5-5. 100% | 75.7 MiB/s | 77.5 KiB | 00m00s [ 73/151] Installing p11-kit-trust-0:0. 100% | 16.2 MiB/s | 448.3 KiB | 00m00s [ 74/151] Installing openssl-libs-1:3.2 100% | 252.4 MiB/s | 7.8 MiB | 00m00s [ 75/151] Installing coreutils-0:9.6-6. 100% | 121.2 MiB/s | 5.5 MiB | 00m00s [ 76/151] Installing ca-certificates-0: 100% | 1.1 MiB/s | 2.5 MiB | 00m02s [ 77/151] Installing gzip-0:1.13-3.fc42 100% | 21.6 MiB/s | 398.4 KiB | 00m00s [ 78/151] Installing rpm-sequoia-0:1.7. 100% | 241.4 MiB/s | 2.4 MiB | 00m00s [ 79/151] Installing libevent-0:2.1.12- 100% | 177.1 MiB/s | 906.9 KiB | 00m00s [ 80/151] Installing util-linux-core-0: 100% | 57.1 MiB/s | 1.4 MiB | 00m00s [ 81/151] Installing systemd-standalone 100% | 16.0 MiB/s | 277.8 KiB | 00m00s [ 82/151] Installing tar-2:1.35-5.fc42. 100% | 105.8 MiB/s | 3.0 MiB | 00m00s [ 83/151] Installing libsemanage-0:3.8. 100% | 74.7 MiB/s | 306.2 KiB | 00m00s [ 84/151] Installing shadow-utils-2:4.1 100% | 91.9 MiB/s | 4.0 MiB | 00m00s [ 85/151] Installing zstd-0:1.5.7-1.fc4 100% | 77.7 MiB/s | 1.7 MiB | 00m00s [ 86/151] Installing zip-0:3.0-43.fc42. 
100% | 36.1 MiB/s | 702.4 KiB | 00m00s [ 87/151] Installing libfdisk-0:2.40.4- 100% | 182.3 MiB/s | 373.4 KiB | 00m00s [ 88/151] Installing libxml2-0:2.12.10- 100% | 73.8 MiB/s | 1.7 MiB | 00m00s [ 89/151] Installing libarchive-0:3.8.1 100% | 186.9 MiB/s | 957.1 KiB | 00m00s [ 90/151] Installing bzip2-0:1.0.8-20.f 100% | 6.0 MiB/s | 103.8 KiB | 00m00s [ 91/151] Installing add-determinism-0: 100% | 98.6 MiB/s | 2.5 MiB | 00m00s [ 92/151] Installing build-reproducibil 100% | 0.0 B/s | 1.0 KiB | 00m00s [ 93/151] Installing sqlite-libs-0:3.47 100% | 216.1 MiB/s | 1.5 MiB | 00m00s [ 94/151] Installing rpm-libs-0:4.20.1- 100% | 176.6 MiB/s | 723.4 KiB | 00m00s [ 95/151] Installing ed-0:1.21-2.fc42.x 100% | 8.5 MiB/s | 148.8 KiB | 00m00s [ 96/151] Installing patch-0:2.8-1.fc42 100% | 12.9 MiB/s | 224.3 KiB | 00m00s [ 97/151] Installing filesystem-srpm-ma 100% | 38.0 MiB/s | 38.9 KiB | 00m00s [ 98/151] Installing elfutils-default-y 100% | 255.4 KiB/s | 2.0 KiB | 00m00s [ 99/151] Installing elfutils-libs-0:0. 100% | 167.3 MiB/s | 685.2 KiB | 00m00s [100/151] Installing cpio-0:2.15-4.fc42 100% | 47.8 MiB/s | 1.1 MiB | 00m00s [101/151] Installing diffutils-0:3.12-1 100% | 67.9 MiB/s | 1.6 MiB | 00m00s [102/151] Installing json-c-0:0.18-2.fc 100% | 85.9 MiB/s | 88.0 KiB | 00m00s [103/151] Installing libgomp-0:15.2.1-1 100% | 176.6 MiB/s | 542.5 KiB | 00m00s [104/151] Installing rpm-build-libs-0:4 100% | 101.3 MiB/s | 207.4 KiB | 00m00s [105/151] Installing jansson-0:2.14-2.f 100% | 92.2 MiB/s | 94.4 KiB | 00m00s [106/151] Installing libpkgconf-0:2.3.0 100% | 77.4 MiB/s | 79.2 KiB | 00m00s [107/151] Installing pkgconf-0:2.3.0-2. 100% | 5.6 MiB/s | 91.0 KiB | 00m00s [108/151] Installing pkgconf-pkg-config 100% | 110.8 KiB/s | 1.8 KiB | 00m00s [109/151] Installing libbrotli-0:1.1.0- 100% | 205.9 MiB/s | 843.6 KiB | 00m00s [110/151] Installing libnghttp2-0:1.64. 100% | 83.7 MiB/s | 171.5 KiB | 00m00s [111/151] Installing xxhash-libs-0:0.8. 
100% | 89.4 MiB/s | 91.6 KiB | 00m00s [112/151] Installing keyutils-libs-0:1. 100% | 58.3 MiB/s | 59.7 KiB | 00m00s [113/151] Installing libcom_err-0:1.47. 100% | 66.6 MiB/s | 68.2 KiB | 00m00s [114/151] Installing libverto-0:0.3.2-1 100% | 26.6 MiB/s | 27.2 KiB | 00m00s [115/151] Installing krb5-libs-0:1.21.3 100% | 208.3 MiB/s | 2.3 MiB | 00m00s [116/151] Installing libssh-0:0.11.3-1. 100% | 185.3 MiB/s | 569.2 KiB | 00m00s [117/151] Installing libtool-ltdl-0:2.5 100% | 69.6 MiB/s | 71.2 KiB | 00m00s [118/151] Installing gdbm-libs-1:1.23-9 100% | 64.2 MiB/s | 131.6 KiB | 00m00s [119/151] Installing cyrus-sasl-lib-0:2 100% | 92.2 MiB/s | 2.3 MiB | 00m00s [120/151] Installing openldap-0:2.6.10- 100% | 161.0 MiB/s | 659.6 KiB | 00m00s [121/151] Installing libcurl-0:8.11.1-6 100% | 203.9 MiB/s | 835.2 KiB | 00m00s [122/151] Installing elfutils-debuginfo 100% | 5.0 MiB/s | 86.2 KiB | 00m00s [123/151] Installing elfutils-0:0.193-2 100% | 112.4 MiB/s | 2.9 MiB | 00m00s [124/151] Installing binutils-0:2.44-6. 100% | 215.3 MiB/s | 25.8 MiB | 00m00s [125/151] Installing gdb-minimal-0:16.3 100% | 207.0 MiB/s | 13.2 MiB | 00m00s [126/151] Installing debugedit-0:5.1-7. 
100% | 11.2 MiB/s | 195.4 KiB | 00m00s [127/151] Installing curl-0:8.11.1-6.fc 100% | 14.8 MiB/s | 453.1 KiB | 00m00s [128/151] Installing rpm-0:4.20.1-1.fc4 100% | 60.9 MiB/s | 2.5 MiB | 00m00s [129/151] Installing lua-srpm-macros-0: 100% | 1.9 MiB/s | 1.9 KiB | 00m00s [130/151] Installing tree-sitter-srpm-m 100% | 7.2 MiB/s | 7.4 KiB | 00m00s [131/151] Installing zig-srpm-macros-0: 100% | 0.0 B/s | 1.7 KiB | 00m00s [132/151] Installing efi-srpm-macros-0: 100% | 40.2 MiB/s | 41.1 KiB | 00m00s [133/151] Installing perl-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s [134/151] Installing package-notes-srpm 100% | 0.0 B/s | 2.0 KiB | 00m00s [135/151] Installing openblas-srpm-macr 100% | 0.0 B/s | 392.0 B | 00m00s [136/151] Installing ocaml-srpm-macros- 100% | 0.0 B/s | 2.2 KiB | 00m00s [137/151] Installing kernel-srpm-macros 100% | 0.0 B/s | 2.3 KiB | 00m00s [138/151] Installing gnat-srpm-macros-0 100% | 0.0 B/s | 1.3 KiB | 00m00s [139/151] Installing ghc-srpm-macros-0: 100% | 0.0 B/s | 1.0 KiB | 00m00s [140/151] Installing fpc-srpm-macros-0: 100% | 0.0 B/s | 420.0 B | 00m00s [141/151] Installing ansible-srpm-macro 100% | 35.4 MiB/s | 36.2 KiB | 00m00s [142/151] Installing forge-srpm-macros- 100% | 39.3 MiB/s | 40.3 KiB | 00m00s [143/151] Installing fonts-srpm-macros- 100% | 55.7 MiB/s | 57.0 KiB | 00m00s [144/151] Installing go-srpm-macros-0:3 100% | 61.6 MiB/s | 63.0 KiB | 00m00s [145/151] Installing python-srpm-macros 100% | 50.9 MiB/s | 52.2 KiB | 00m00s [146/151] Installing redhat-rpm-config- 100% | 62.6 MiB/s | 192.2 KiB | 00m00s [147/151] Installing rpm-build-0:4.20.1 100% | 8.7 MiB/s | 177.4 KiB | 00m00s [148/151] Installing pyproject-srpm-mac 100% | 1.2 MiB/s | 2.5 KiB | 00m00s [149/151] Installing util-linux-0:2.40. 100% | 69.3 MiB/s | 3.5 MiB | 00m00s [150/151] Installing which-0:2.23-2.fc4 100% | 4.9 MiB/s | 85.7 KiB | 00m00s [151/151] Installing info-0:7.2-3.fc42. 100% | 119.1 KiB/s | 358.3 KiB | 00m03s Complete! 
Updating and loading repositories: Additional repo https_developer_downlo 100% | 163.9 KiB/s | 3.9 KiB | 00m00s Copr repository 100% | 75.8 KiB/s | 1.8 KiB | 00m00s fedora 100% | 150.9 KiB/s | 30.3 KiB | 00m00s updates 100% | 216.1 KiB/s | 29.0 KiB | 00m00s Repositories loaded. Package Arch Version Repository Size Installing: cuda x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B Installing dependencies: OpenCL-ICD-Loader x86_64 3.0.6-2.20241023git5907ac1.fc42 fedora 70.7 KiB abattis-cantarell-vf-fonts noarch 0.301-14.fc42 fedora 192.7 KiB adwaita-cursor-theme noarch 48.1-1.fc42 updates 11.4 MiB adwaita-icon-theme noarch 48.1-1.fc42 updates 1.2 MiB adwaita-icon-theme-legacy noarch 46.2-3.fc42 fedora 2.1 MiB alsa-lib x86_64 1.2.14-3.fc42 updates 1.4 MiB annobin-docs noarch 12.94-1.fc42 updates 98.9 KiB annobin-plugin-gcc x86_64 12.94-1.fc42 updates 993.5 KiB at-spi2-atk x86_64 2.56.5-1.fc42 updates 275.6 KiB at-spi2-core x86_64 2.56.5-1.fc42 updates 1.5 MiB atk x86_64 2.56.5-1.fc42 updates 248.6 KiB authselect x86_64 1.5.1-1.fc42 fedora 153.9 KiB authselect-libs x86_64 1.5.1-1.fc42 fedora 825.0 KiB avahi-glib x86_64 0.9~rc2-2.fc42 fedora 23.6 KiB avahi-libs x86_64 0.9~rc2-2.fc42 fedora 183.6 KiB cairo x86_64 1.18.2-3.fc42 fedora 1.8 MiB cairo-gobject x86_64 1.18.2-3.fc42 fedora 35.1 KiB cmake-filesystem x86_64 3.31.6-2.fc42 fedora 0.0 B colord-libs x86_64 1.4.7-6.fc42 fedora 850.7 KiB cpp x86_64 15.2.1-1.fc42 updates 37.9 MiB cracklib x86_64 2.9.11-7.fc42 fedora 242.4 KiB cuda-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cuda-cccl-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 13.2 MiB cuda-command-line-tools-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cuda-compiler-13-0 x86_64 13.0.2-1 
https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cuda-crt-13-0 x86_64 13.0.88-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 936.8 KiB cuda-cudart-13-0 x86_64 13.0.96-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 754.1 KiB cuda-cudart-devel-13-0 x86_64 13.0.96-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 6.2 MiB cuda-culibos-devel-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 96.4 KiB cuda-cuobjdump-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 750.4 KiB cuda-cupti-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 146.2 MiB cuda-cuxxfilt-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 1.0 MiB cuda-documentation-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 538.3 KiB cuda-driver-devel-13-0 x86_64 13.0.96-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 135.3 KiB cuda-gdb-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 92.0 MiB cuda-libraries-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cuda-libraries-devel-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cuda-nsight-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 113.2 MiB cuda-nsight-compute-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 5.3 KiB cuda-nsight-systems-13-0 x86_64 13.0.2-1 
https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 1.7 KiB cuda-nvcc-13-0 x86_64 13.0.88-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 111.0 MiB cuda-nvdisasm-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 4.8 MiB cuda-nvml-devel-13-0 x86_64 13.0.87-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 1.4 MiB cuda-nvprune-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 181.3 KiB cuda-nvrtc-13-0 x86_64 13.0.88-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 217.4 MiB cuda-nvrtc-devel-13-0 x86_64 13.0.88-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 244.5 MiB cuda-nvtx-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 624.7 KiB cuda-opencl-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 96.5 KiB cuda-opencl-devel-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 747.9 KiB cuda-profiler-api-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 77.6 KiB cuda-runtime-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cuda-sandbox-devel-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 149.4 KiB cuda-sanitizer-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 41.3 MiB cuda-toolkit-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 3.6 KiB cuda-toolkit-13-0-config-common noarch 13.0.96-1 
https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cuda-toolkit-13-config-common noarch 13.0.96-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 44.0 B cuda-toolkit-config-common noarch 13.0.96-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 41.0 B cuda-tools-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cuda-visual-tools-13-0 x86_64 13.0.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B cups-filesystem noarch 1:2.4.14-2.fc42 updates 0.0 B cups-libs x86_64 1:2.4.14-2.fc42 updates 618.7 KiB dbus x86_64 1:1.16.0-3.fc42 fedora 0.0 B dbus-broker x86_64 36-6.fc42 updates 387.1 KiB dbus-common noarch 1:1.16.0-3.fc42 fedora 11.2 KiB dbus-libs x86_64 1:1.16.0-3.fc42 fedora 349.5 KiB default-fonts-core-sans noarch 4.2-4.fc42 fedora 11.9 KiB dkms noarch 3.2.2-1.fc42 updates 209.7 KiB egl-gbm x86_64 2:1.1.2.1-1.fc42 updates 29.3 KiB egl-wayland x86_64 1.1.20-2.fc42 updates 83.4 KiB egl-x11 x86_64 1.0.3-1.fc42 updates 165.8 KiB elfutils-libelf-devel x86_64 0.193-2.fc42 updates 50.0 KiB expat x86_64 2.7.2-1.fc42 updates 298.6 KiB fontconfig x86_64 2.16.0-2.fc42 fedora 764.7 KiB fonts-filesystem noarch 1:2.0.5-22.fc42 updates 0.0 B freetype x86_64 2.13.3-2.fc42 fedora 858.2 KiB fribidi x86_64 1.0.16-2.fc42 fedora 194.3 KiB gcc x86_64 15.2.1-1.fc42 updates 111.2 MiB gcc-c++ x86_64 15.2.1-1.fc42 updates 41.3 MiB gcc-plugin-annobin x86_64 15.2.1-1.fc42 updates 57.1 KiB gdbm x86_64 1:1.23-9.fc42 fedora 460.3 KiB gdk-pixbuf2 x86_64 2.42.12-12.fc42 updates 2.5 MiB gdk-pixbuf2-modules x86_64 2.42.12-12.fc42 updates 55.4 KiB gds-tools-13-0 x86_64 1.15.1.6-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 60.0 MiB glib2 x86_64 2.84.4-1.fc42 updates 14.7 MiB glibc-devel x86_64 2.41-11.fc42 updates 2.3 MiB gnutls x86_64 3.8.10-1.fc42 
updates 3.8 MiB google-noto-fonts-common noarch 20250301-1.fc42 fedora 17.7 KiB google-noto-sans-vf-fonts noarch 20250301-1.fc42 fedora 1.4 MiB graphite2 x86_64 1.3.14-18.fc42 fedora 195.8 KiB gtk-update-icon-cache x86_64 3.24.49-2.fc42 fedora 62.2 KiB gtk3 x86_64 3.24.49-2.fc42 fedora 23.1 MiB harfbuzz x86_64 10.4.0-1.fc42 fedora 2.7 MiB hicolor-icon-theme noarch 0.17-20.fc42 fedora 72.2 KiB hwdata noarch 0.400-1.fc42 updates 9.6 MiB java-21-openjdk x86_64 1:21.0.8.0.9-1.fc42 updates 1.0 MiB java-21-openjdk-headless x86_64 1:21.0.8.0.9-1.fc42 updates 197.8 MiB javapackages-filesystem noarch 6.4.0-5.fc42 fedora 2.0 KiB jbigkit-libs x86_64 2.1-31.fc42 fedora 121.4 KiB json-glib x86_64 1.10.8-1.fc42 updates 592.3 KiB kernel-headers x86_64 6.16.2-200.fc42 updates 6.7 MiB kmod x86_64 33-3.fc42 fedora 235.4 KiB kmod-nvidia-open-dkms noarch 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 119.3 MiB lcms2 x86_64 2.16-5.fc42 fedora 437.7 KiB libICE x86_64 1.1.2-2.fc42 fedora 198.4 KiB libSM x86_64 1.2.5-2.fc42 fedora 105.0 KiB libX11 x86_64 1.8.12-1.fc42 updates 1.3 MiB libX11-common noarch 1.8.12-1.fc42 updates 1.2 MiB libX11-xcb x86_64 1.8.12-1.fc42 updates 10.9 KiB libXau x86_64 1.0.12-2.fc42 fedora 76.9 KiB libXcomposite x86_64 0.4.6-5.fc42 fedora 44.4 KiB libXcursor x86_64 1.2.3-2.fc42 fedora 57.4 KiB libXdamage x86_64 1.1.6-5.fc42 fedora 43.6 KiB libXext x86_64 1.3.6-3.fc42 fedora 90.0 KiB libXfixes x86_64 6.0.1-5.fc42 fedora 30.2 KiB libXft x86_64 2.3.8-8.fc42 fedora 168.4 KiB libXi x86_64 1.8.2-2.fc42 fedora 84.6 KiB libXinerama x86_64 1.1.5-8.fc42 fedora 19.0 KiB libXrandr x86_64 1.5.4-5.fc42 fedora 55.8 KiB libXrender x86_64 0.9.12-2.fc42 fedora 50.0 KiB libXtst x86_64 1.2.5-2.fc42 fedora 33.5 KiB libXxf86vm x86_64 1.1.6-2.fc42 fedora 29.2 KiB libb2 x86_64 0.98.1-13.fc42 fedora 46.1 KiB libcloudproviders x86_64 0.3.6-1.fc42 fedora 124.3 KiB libcublas-13-0 x86_64 13.1.0.3-1 
https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 568.9 MiB libcublas-devel-13-0 x86_64 13.1.0.3-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 907.6 MiB libcufft-13-0 x86_64 12.0.0.61-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 274.3 MiB libcufft-devel-13-0 x86_64 12.0.0.61-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 280.5 MiB libcufile-13-0 x86_64 1.15.1.6-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 3.2 MiB libcufile-devel-13-0 x86_64 1.15.1.6-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 27.9 MiB libcurand-13-0 x86_64 10.4.0.35-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 126.6 MiB libcurand-devel-13-0 x86_64 10.4.0.35-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 129.0 MiB libcusolver-13-0 x86_64 12.0.4.66-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 233.8 MiB libcusolver-devel-13-0 x86_64 12.0.4.66-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 180.9 MiB libcusparse-13-0 x86_64 12.6.3.3-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 155.1 MiB libcusparse-devel-13-0 x86_64 12.6.3.3-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 348.7 MiB libdatrie x86_64 0.2.13-11.fc42 fedora 57.8 KiB libdrm x86_64 2.4.126-1.fc42 updates 399.9 KiB libedit x86_64 3.1-55.20250104cvs.fc42 fedora 244.1 KiB libepoxy x86_64 1.5.10-9.fc42 fedora 1.1 MiB libfontenc x86_64 1.1.8-3.fc42 fedora 70.9 KiB libglvnd x86_64 1:1.7.0-7.fc42 fedora 530.2 KiB libglvnd-egl x86_64 1:1.7.0-7.fc42 fedora 68.7 KiB libglvnd-gles x86_64 1:1.7.0-7.fc42 fedora 105.9 KiB libglvnd-glx x86_64 1:1.7.0-7.fc42 fedora 609.2 KiB libglvnd-opengl 
x86_64 1:1.7.0-7.fc42 fedora 148.8 KiB libgusb x86_64 0.4.9-3.fc42 fedora 162.0 KiB libicu x86_64 76.1-4.fc42 fedora 36.3 MiB libjpeg-turbo x86_64 3.1.2-1.fc42 updates 804.9 KiB liblerc x86_64 4.0.0-8.fc42 fedora 636.1 KiB libmpc x86_64 1.3.1-7.fc42 fedora 164.5 KiB libnpp-13-0 x86_64 13.0.1.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 157.3 MiB libnpp-devel-13-0 x86_64 13.0.1.2-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 184.5 MiB libnsl2 x86_64 2.0.1-3.fc42 fedora 57.9 KiB libnvfatbin-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 2.4 MiB libnvfatbin-devel-13-0 x86_64 13.0.85-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 2.3 MiB libnvidia-cfg x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 386.1 KiB libnvidia-fbc x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 240.0 KiB libnvidia-gpucomp x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 68.9 MiB libnvidia-ml x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 2.2 MiB libnvjitlink-13-0 x86_64 13.0.88-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 94.3 MiB libnvjitlink-devel-13-0 x86_64 13.0.88-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 130.0 MiB libnvjpeg-13-0 x86_64 13.0.1.86-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 5.7 MiB libnvjpeg-devel-13-0 x86_64 13.0.1.86-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 6.4 MiB libnvptxcompiler-13-0 x86_64 13.0.88-1 
https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 85.4 MiB libnvvm-13-0 x86_64 13.0.88-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 133.6 MiB libpciaccess x86_64 0.16-15.fc42 fedora 44.5 KiB libpng x86_64 2:1.6.44-2.fc42 fedora 241.7 KiB libpwquality x86_64 1.4.5-12.fc42 fedora 409.3 KiB libseccomp x86_64 2.5.5-2.fc41 fedora 173.3 KiB libsoup3 x86_64 3.6.5-6.fc42 updates 1.1 MiB libstdc++-devel x86_64 15.2.1-1.fc42 updates 16.1 MiB libthai x86_64 0.1.29-10.fc42 fedora 783.4 KiB libtiff x86_64 4.7.0-8.fc42 updates 619.1 KiB libtinysparql x86_64 3.9.2-1.fc42 updates 1.3 MiB libtirpc x86_64 1.3.7-0.fc42 updates 198.9 KiB libusb1 x86_64 1.0.29-4.fc42 updates 171.3 KiB libvdpau x86_64 1.5-9.fc42 fedora 20.7 KiB libwayland-client x86_64 1.24.0-1.fc42 updates 62.0 KiB libwayland-cursor x86_64 1.24.0-1.fc42 updates 37.3 KiB libwayland-egl x86_64 1.24.0-1.fc42 updates 12.4 KiB libwayland-server x86_64 1.24.0-1.fc42 updates 78.5 KiB libwebp x86_64 1.5.0-2.fc42 fedora 947.6 KiB libxcb x86_64 1.17.0-5.fc42 fedora 1.1 MiB libxcrypt-devel x86_64 4.4.38-7.fc42 updates 30.8 KiB libxkbcommon x86_64 1.8.1-1.fc42 fedora 367.4 KiB libxkbcommon-x11 x86_64 1.8.1-1.fc42 fedora 35.5 KiB libxshmfence x86_64 1.3.2-6.fc42 fedora 12.4 KiB libzstd-devel x86_64 1.5.7-1.fc42 fedora 208.0 KiB lksctp-tools x86_64 1.0.21-1.fc42 updates 251.0 KiB llvm-filesystem x86_64 20.1.8-4.fc42 updates 0.0 B llvm-libs x86_64 20.1.8-4.fc42 updates 137.1 MiB lm_sensors-libs x86_64 3.6.0-22.fc42 fedora 85.8 KiB make x86_64 1:4.4.1-10.fc42 fedora 1.8 MiB mesa-dri-drivers x86_64 25.1.9-1.fc42 updates 46.7 MiB mesa-filesystem x86_64 25.1.9-1.fc42 updates 3.6 KiB mesa-libEGL x86_64 25.1.9-1.fc42 updates 334.9 KiB mesa-libGL x86_64 25.1.9-1.fc42 updates 306.2 KiB mesa-libgbm x86_64 25.1.9-1.fc42 updates 19.7 KiB mkfontscale x86_64 1.2.3-2.fc42 fedora 45.0 KiB mpdecimal x86_64 4.0.1-1.fc42 updates 217.2 KiB nettle x86_64 3.10.1-1.fc42 fedora 
790.5 KiB nsight-compute-2025.3.1 x86_64 2025.3.1.4-1 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 1.2 GiB nsight-systems-2025.3.2 x86_64 2025.3.2.474_253236389321v0-0 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 1.0 GiB nspr x86_64 4.37.0-3.fc42 updates 315.5 KiB nss x86_64 3.116.0-1.fc42 updates 1.9 MiB nss-softokn x86_64 3.116.0-1.fc42 updates 1.9 MiB nss-softokn-freebl x86_64 3.116.0-1.fc42 updates 848.3 KiB nss-sysinit x86_64 3.116.0-1.fc42 updates 18.1 KiB nss-util x86_64 3.116.0-1.fc42 updates 200.8 KiB numactl-libs x86_64 2.0.19-2.fc42 fedora 52.9 KiB nvidia-driver x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 18.1 MiB nvidia-driver-cuda x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 1.4 MiB nvidia-driver-cuda-libs x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 343.5 MiB nvidia-driver-libs x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 431.7 MiB nvidia-kmod-common noarch 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 100.4 MiB nvidia-libXNVCtrl x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 44.1 KiB nvidia-modprobe x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 54.8 KiB nvidia-open noarch 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 0.0 B nvidia-persistenced x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 58.1 KiB nvidia-settings x86_64 3:580.95.05-1.fc42 
https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch 1.7 MiB opencl-filesystem noarch 1.0-22.fc42 fedora 0.0 B openssl x86_64 1:3.2.6-2.fc42 updates 1.7 MiB pam x86_64 1.7.0-6.fc42 updates 1.6 MiB pango x86_64 1.56.4-2.fc42 updates 1.0 MiB pixman x86_64 0.46.2-1.fc42 updates 710.3 KiB python-pip-wheel noarch 24.3.1-5.fc42 updates 1.2 MiB python3 x86_64 3.13.7-1.fc42 updates 28.7 KiB python3-libs x86_64 3.13.7-1.fc42 updates 40.1 MiB shared-mime-info x86_64 2.3-7.fc42 fedora 5.2 MiB spirv-tools-libs x86_64 2025.2-2.fc42 updates 5.8 MiB systemd x86_64 257.9-2.fc42 updates 12.1 MiB systemd-pam x86_64 257.9-2.fc42 updates 1.1 MiB systemd-rpm-macros noarch 257.9-2.fc42 updates 10.7 KiB systemd-shared x86_64 257.9-2.fc42 updates 4.6 MiB ttmkfdir x86_64 3.0.9-72.fc42 fedora 118.5 KiB tzdata noarch 2025b-1.fc42 fedora 1.6 MiB tzdata-java noarch 2025b-1.fc42 fedora 100.1 KiB vulkan-loader x86_64 1.4.313.0-1.fc42 updates 532.4 KiB xcb-util x86_64 0.4.1-7.fc42 fedora 26.3 KiB xcb-util-image x86_64 0.4.1-7.fc42 fedora 22.2 KiB xcb-util-keysyms x86_64 0.4.1-7.fc42 fedora 16.7 KiB xcb-util-renderutil x86_64 0.3.10-7.fc42 fedora 24.4 KiB xcb-util-wm x86_64 0.4.2-7.fc42 fedora 81.2 KiB xkeyboard-config noarch 2.44-1.fc42 fedora 6.6 MiB xml-common noarch 0.6.3-66.fc42 fedora 78.4 KiB xorg-x11-fonts-Type1 noarch 7.5-40.fc42 fedora 863.3 KiB xprop x86_64 1.2.8-3.fc42 fedora 54.7 KiB zlib-ng-compat-devel x86_64 2.2.5-2.fc42 updates 107.0 KiB Transaction Summary: Installing: 249 packages Total size of inbound packages is 4 GiB. Need to download 0 B. After this operation, 9 GiB extra will be used (install 9 GiB, remove 0 B). [ 1/249] cuda-0:13.0.2-1.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 2/249] cuda-13-0-0:13.0.2-1.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 3/249] cuda-runtime-13-0-0:13.0.2-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 4/249] cuda-toolkit-13-0-0:13.0.2-1. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 5/249] cuda-libraries-13-0-0:13.0.2- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 6/249] cuda-compiler-13-0-0:13.0.2-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 7/249] cuda-documentation-13-0-0:13. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 8/249] cuda-libraries-devel-13-0-0:1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 9/249] cuda-nvml-devel-13-0-0:13.0.8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 10/249] cuda-tools-13-0-0:13.0.2-1.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 11/249] cuda-cudart-13-0-0:13.0.96-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 12/249] cuda-nvrtc-13-0-0:13.0.88-1.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 13/249] cuda-opencl-13-0-0:13.0.85-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 14/249] libcublas-13-0-0:13.1.0.3-1.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 15/249] libcufft-13-0-0:12.0.0.61-1.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 16/249] libcufile-13-0-0:1.15.1.6-1.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 17/249] libcurand-13-0-0:10.4.0.35-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 18/249] libcusolver-13-0-0:12.0.4.66- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 19/249] libcusparse-13-0-0:12.6.3.3-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 20/249] libnpp-13-0-0:13.0.1.2-1.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 21/249] libnvfatbin-13-0-0:13.0.85-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 22/249] libnvjitlink-13-0-0:13.0.88-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 23/249] libnvjpeg-13-0-0:13.0.1.86-1. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 24/249] cuda-crt-13-0-0:13.0.88-1.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 25/249] cuda-cuobjdump-13-0-0:13.0.85 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 26/249] cuda-cuxxfilt-13-0-0:13.0.85- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 27/249] cuda-nvcc-13-0-0:13.0.88-1.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 28/249] cuda-nvprune-13-0-0:13.0.85-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 29/249] libnvptxcompiler-13-0-0:13.0. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 30/249] libnvvm-13-0-0:13.0.88-1.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 31/249] cuda-cccl-13-0-0:13.0.85-1.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 32/249] cuda-cudart-devel-13-0-0:13.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 33/249] cuda-culibos-devel-13-0-0:13. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 34/249] cuda-driver-devel-13-0-0:13.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 35/249] cuda-nvrtc-devel-13-0-0:13.0. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 36/249] cuda-opencl-devel-13-0-0:13.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 37/249] cuda-profiler-api-13-0-0:13.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 38/249] cuda-sandbox-devel-13-0-0:13. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 39/249] libcublas-devel-13-0-0:13.1.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 40/249] libcufft-devel-13-0-0:12.0.0. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 41/249] libcufile-devel-13-0-0:1.15.1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 42/249] libcurand-devel-13-0-0:10.4.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 43/249] libcusolver-devel-13-0-0:12.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 44/249] libcusparse-devel-13-0-0:12.6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 45/249] libnpp-devel-13-0-0:13.0.1.2- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 46/249] libnvfatbin-devel-13-0-0:13.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 47/249] libnvjitlink-devel-13-0-0:13. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 48/249] libnvjpeg-devel-13-0-0:13.0.1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 49/249] cuda-command-line-tools-13-0- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 50/249] cuda-visual-tools-13-0-0:13.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 51/249] gds-tools-13-0-0:1.15.1.6-1.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 52/249] cuda-cupti-13-0-0:13.0.85-1.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 53/249] cuda-gdb-13-0-0:13.0.85-1.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 54/249] cuda-nvdisasm-13-0-0:13.0.85- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 55/249] cuda-nvtx-13-0-0:13.0.85-1.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 56/249] cuda-sanitizer-13-0-0:13.0.85 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 57/249] cuda-nsight-13-0-0:13.0.85-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 58/249] cuda-nsight-compute-13-0-0:13 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 59/249] cuda-nsight-systems-13-0-0:13 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 60/249] numactl-libs-0:2.0.19-2.fc42. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 61/249] nsight-compute-2025.3.1-0:202 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 62/249] nsight-systems-2025.3.2-0:202 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 63/249] dbus-libs-1:1.16.0-3.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 64/249] fontconfig-0:2.16.0-2.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 65/249] libSM-0:1.2.5-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 66/249] libXcomposite-0:0.4.6-5.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 67/249] libXdamage-0:1.1.6-5.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 68/249] libXext-0:1.3.6-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 69/249] libXrandr-0:1.5.4-5.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 70/249] libXtst-0:1.2.5-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 71/249] libglvnd-egl-1:1.7.0-7.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 72/249] libglvnd-opengl-1:1.7.0-7.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 73/249] libxcb-0:1.17.0-5.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 74/249] libxkbcommon-0:1.8.1-1.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 75/249] libxkbcommon-x11-0:1.8.1-1.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 76/249] xcb-util-image-0:0.4.1-7.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 77/249] xcb-util-keysyms-0:0.4.1-7.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 78/249] xcb-util-renderutil-0:0.3.10- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 79/249] xcb-util-wm-0:0.4.2-7.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 80/249] default-fonts-core-sans-0:4.2 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 81/249] freetype-0:2.13.3-2.fc42.x86_ 100% | 
0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 82/249] xml-common-0:0.6.3-66.fc42.no 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 83/249] libICE-0:1.1.2-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 84/249] libXfixes-0:6.0.1-5.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 85/249] libXrender-0:0.9.12-2.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 86/249] libXi-0:1.8.2-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 87/249] libglvnd-1:1.7.0-7.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 88/249] libXau-0:1.0.12-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 89/249] xkeyboard-config-0:2.44-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 90/249] xcb-util-0:0.4.1-7.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 91/249] abattis-cantarell-vf-fonts-0: 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 92/249] google-noto-sans-vf-fonts-0:2 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 93/249] harfbuzz-0:10.4.0-1.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 94/249] libpng-2:1.6.44-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 95/249] google-noto-fonts-common-0:20 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 96/249] graphite2-0:1.3.14-18.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 97/249] mesa-libEGL-0:25.1.9-1.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 98/249] mesa-dri-drivers-0:25.1.9-1.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [ 99/249] mesa-libgbm-0:25.1.9-1.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [100/249] libxshmfence-0:1.3.2-6.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [101/249] lm_sensors-libs-0:3.6.0-22.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [102/249] mesa-filesystem-0:25.1.9-1.fc 100% | 0.0 B/s 
| 0.0 B | 00m00s >>> Already downloaded [103/249] libX11-0:1.8.12-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [104/249] libX11-common-0:1.8.12-1.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [105/249] glib2-0:2.84.4-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [106/249] fonts-filesystem-1:2.0.5-22.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [107/249] alsa-lib-0:1.2.14-3.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [108/249] libX11-xcb-0:1.8.12-1.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [109/249] nss-0:3.116.0-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [110/249] nss-softokn-0:3.116.0-1.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [111/249] nss-softokn-freebl-0:3.116.0- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [112/249] nss-sysinit-0:3.116.0-1.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [113/249] cuda-toolkit-13-0-config-comm 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [114/249] cuda-toolkit-13-config-common 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [115/249] cuda-toolkit-config-common-0: 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [116/249] openssl-1:3.2.6-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [117/249] nvidia-driver-cuda-3:580.95.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [118/249] nvidia-driver-cuda-libs-3:580 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [119/249] nvidia-kmod-common-3:580.95.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [120/249] nvidia-persistenced-3:580.95. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [121/249] opencl-filesystem-0:1.0-22.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [122/249] libnvidia-cfg-3:580.95.05-1.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [123/249] libnvidia-gpucomp-3:580.95.05 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [124/249] libnvidia-ml-3:580.95.05-1.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [125/249] nvidia-modprobe-3:580.95.05-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [126/249] nvidia-open-3:580.95.05-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [127/249] kmod-nvidia-open-dkms-3:580.9 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [128/249] nvidia-driver-3:580.95.05-1.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [129/249] nvidia-libXNVCtrl-3:580.95.05 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [130/249] nvidia-settings-3:580.95.05-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [131/249] dkms-0:3.2.2-1.fc42.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [132/249] nvidia-driver-libs-3:580.95.0 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [133/249] cairo-0:1.18.2-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [134/249] gtk3-0:3.24.49-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [135/249] libXxf86vm-0:1.1.6-2.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [136/249] libvdpau-0:1.5-9.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [137/249] kmod-0:33-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [138/249] make-1:4.4.1-10.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [139/249] libglvnd-gles-1:1.7.0-7.fc42. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [140/249] libglvnd-glx-1:1.7.0-7.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [141/249] cairo-gobject-0:1.18.2-3.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [142/249] colord-libs-0:1.4.7-6.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [143/249] fribidi-0:1.0.16-2.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [144/249] gtk-update-icon-cache-0:3.24. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [145/249] hicolor-icon-theme-0:0.17-20. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [146/249] libXcursor-0:1.2.3-2.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [147/249] libXinerama-0:1.1.5-8.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [148/249] libcloudproviders-0:0.3.6-1.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [149/249] libepoxy-0:1.5.10-9.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [150/249] mesa-libGL-0:25.1.9-1.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [151/249] lcms2-0:2.16-5.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [152/249] libgusb-0:0.4.9-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [153/249] gcc-c++-0:15.2.1-1.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [154/249] libmpc-0:1.3.1-7.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [155/249] gcc-0:15.2.1-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [156/249] cpp-0:15.2.1-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [157/249] java-21-openjdk-1:21.0.8.0.9- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [158/249] xorg-x11-fonts-Type1-0:7.5-40 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [159/249] java-21-openjdk-headless-1:21 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [160/249] mkfontscale-0:1.2.3-2.fc42.x8 100% | 0.0 B/s 
| 0.0 B | 00m00s >>> Already downloaded [161/249] ttmkfdir-0:3.0.9-72.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [162/249] javapackages-filesystem-0:6.4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [163/249] tzdata-java-0:2025b-1.fc42.no 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [164/249] libfontenc-0:1.1.8-3.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [165/249] nspr-0:4.37.0-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [166/249] nss-util-0:3.116.0-1.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [167/249] expat-0:2.7.2-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [168/249] libdrm-0:2.4.126-1.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [169/249] libpciaccess-0:0.16-15.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [170/249] libwayland-client-0:1.24.0-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [171/249] libwayland-cursor-0:1.24.0-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [172/249] libwayland-server-0:1.24.0-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [173/249] llvm-libs-0:20.1.8-4.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [174/249] libedit-0:3.1-55.20250104cvs. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [175/249] llvm-filesystem-0:20.1.8-4.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [176/249] spirv-tools-libs-0:2025.2-2.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [177/249] cups-libs-1:2.4.14-2.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [178/249] avahi-libs-0:0.9~rc2-2.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [179/249] cups-filesystem-1:2.4.14-2.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [180/249] lksctp-tools-0:1.0.21-1.fc42. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [181/249] gnutls-0:3.8.10-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [182/249] nettle-0:3.10.1-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [183/249] libstdc++-devel-0:15.2.1-1.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [184/249] glibc-devel-0:2.41-11.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [185/249] elfutils-libelf-devel-0:0.193 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [186/249] hwdata-0:0.400-1.fc42.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [187/249] json-glib-0:1.10.8-1.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [188/249] libusb1-0:1.0.29-4.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [189/249] adwaita-icon-theme-0:48.1-1.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [190/249] adwaita-icon-theme-legacy-0:4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [191/249] adwaita-cursor-theme-0:48.1-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [192/249] at-spi2-atk-0:2.56.5-1.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [193/249] at-spi2-core-0:2.56.5-1.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [194/249] atk-0:2.56.5-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [195/249] dbus-1:1.16.0-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [196/249] xprop-0:1.2.8-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [197/249] gdk-pixbuf2-0:2.42.12-12.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [198/249] shared-mime-info-0:2.3-7.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [199/249] gdk-pixbuf2-modules-0:2.42.12 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [200/249] libtinysparql-0:3.9.2-1.fc42. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [201/249] avahi-glib-0:0.9~rc2-2.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [202/249] libicu-0:76.1-4.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [203/249] libwayland-egl-0:1.24.0-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [204/249] pango-0:1.56.4-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [205/249] libXft-0:2.3.8-8.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [206/249] libthai-0:0.1.29-10.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [207/249] libdatrie-0:0.2.13-11.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [208/249] pixman-0:0.46.2-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [209/249] libtirpc-0:1.3.7-0.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [210/249] systemd-0:257.9-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [211/249] libseccomp-0:2.5.5-2.fc41.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [212/249] systemd-pam-0:257.9-2.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [213/249] systemd-shared-0:257.9-2.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [214/249] libnvidia-fbc-3:580.95.05-1.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [215/249] egl-gbm-2:1.1.2.1-1.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [216/249] egl-wayland-0:1.1.20-2.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [217/249] egl-x11-0:1.0.3-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [218/249] vulkan-loader-0:1.4.313.0-1.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [219/249] python3-0:3.13.7-1.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [220/249] python3-libs-0:3.13.7-1.fc42. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [221/249] libb2-0:0.98.1-13.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [222/249] tzdata-0:2025b-1.fc42.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [223/249] dbus-broker-0:36-6.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [224/249] dbus-common-1:1.16.0-3.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [225/249] mpdecimal-0:4.0.1-1.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [226/249] python-pip-wheel-0:24.3.1-5.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [227/249] libsoup3-0:3.6.5-6.fc42.x86_6 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [228/249] kernel-headers-0:6.16.2-200.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [229/249] libxcrypt-devel-0:4.4.38-7.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [230/249] libtiff-0:4.7.0-8.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [231/249] jbigkit-libs-0:2.1-31.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [232/249] liblerc-0:4.0.0-8.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [233/249] libwebp-0:1.5.0-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [234/249] libjpeg-turbo-0:3.1.2-1.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [235/249] libzstd-devel-0:1.5.7-1.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [236/249] zlib-ng-compat-devel-0:2.2.5- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [237/249] cmake-filesystem-0:3.31.6-2.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [238/249] OpenCL-ICD-Loader-0:3.0.6-2.2 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [239/249] gcc-plugin-annobin-0:15.2.1-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [240/249] systemd-rpm-macros-0:257.9-2. 
100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [241/249] authselect-libs-0:1.5.1-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [242/249] pam-0:1.7.0-6.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [243/249] authselect-0:1.5.1-1.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [244/249] libnsl2-0:2.0.1-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [245/249] libpwquality-0:1.4.5-12.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [246/249] cracklib-0:2.9.11-7.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [247/249] gdbm-1:1.23-9.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [248/249] annobin-plugin-gcc-0:12.94-1. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded [249/249] annobin-docs-0:12.94-1.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded -------------------------------------------------------------------------------- [249/249] Total 100% | 0.0 B/s | 0.0 B | 02m05s Running transaction [ 1/251] Verify package files 100% | 12.0 B/s | 249.0 B | 00m21s >>> Running %pretrans scriptlet: java-21-openjdk-headless-1:21.0.8.0.9-1.fc42.x8 >>> Finished %pretrans scriptlet: java-21-openjdk-headless-1:21.0.8.0.9-1.fc42.x >>> [RPM] /var/lib/mock/fedora-42-x86_64-1760489761.899953/root/var/cache/dnf/ht [ 2/251] Prepare transaction 100% | 1.6 KiB/s | 249.0 B | 00m00s [ 3/251] Installing cuda-toolkit-confi 100% | 33.9 KiB/s | 312.0 B | 00m00s [ 4/251] Installing cuda-toolkit-13-co 100% | 0.0 B/s | 316.0 B | 00m00s [ 5/251] Installing cuda-toolkit-13-0- 100% | 121.1 KiB/s | 124.0 B | 00m00s [ 6/251] Installing cuda-culibos-devel 100% | 94.7 MiB/s | 97.0 KiB | 00m00s [ 7/251] Installing expat-0:2.7.2-1.fc 100% | 19.6 MiB/s | 300.7 KiB | 00m00s [ 8/251] Installing libwayland-client- 100% | 61.7 MiB/s | 63.2 KiB | 00m00s [ 9/251] Installing nspr-0:4.37.0-3.fc 100% | 103.2 MiB/s | 317.2 KiB | 00m00s [ 10/251] Installing 
libX11-xcb-0:1.8.1 100% | 0.0 B/s | 11.8 KiB | 00m00s [ 11/251] Installing libglvnd-1:1.7.0-7 100% | 259.6 MiB/s | 531.6 KiB | 00m00s [ 12/251] Installing nss-util-0:3.116.0 100% | 197.0 MiB/s | 201.8 KiB | 00m00s [ 13/251] Installing libnvidia-ml-3:580 100% | 363.0 MiB/s | 2.2 MiB | 00m00s [ 14/251] Installing dbus-libs-1:1.16.0 100% | 171.2 MiB/s | 350.6 KiB | 00m00s [ 15/251] Installing avahi-libs-0:0.9~r 100% | 181.8 MiB/s | 186.2 KiB | 00m00s [ 16/251] Installing libtirpc-0:1.3.7-0 100% | 196.0 MiB/s | 200.7 KiB | 00m00s [ 17/251] Installing libwayland-server- 100% | 77.8 MiB/s | 79.7 KiB | 00m00s [ 18/251] Installing libmpc-0:1.3.1-7.f 100% | 162.2 MiB/s | 166.1 KiB | 00m00s [ 19/251] Installing libnvidia-gpucomp- 100% | 403.0 MiB/s | 68.9 MiB | 00m00s [ 20/251] Installing fonts-filesystem-1 100% | 0.0 B/s | 788.0 B | 00m00s [ 21/251] Installing libpng-2:1.6.44-2. 100% | 39.5 MiB/s | 242.9 KiB | 00m00s [ 22/251] Installing libglvnd-opengl-1: 100% | 146.1 MiB/s | 149.6 KiB | 00m00s [ 23/251] Installing cuda-cudart-13-0-0 100% | 6.8 MiB/s | 755.6 KiB | 00m00s [ 24/251] Installing cuda-opencl-13-0-0 100% | 16.0 MiB/s | 98.1 KiB | 00m00s [ 25/251] Installing libcublas-13-0-0:1 100% | 143.3 MiB/s | 568.9 MiB | 00m04s [ 26/251] Installing libcufft-13-0-0:12 100% | 124.2 MiB/s | 274.3 MiB | 00m02s [ 27/251] Installing libcufile-13-0-0:1 100% | 12.8 MiB/s | 3.2 MiB | 00m00s [ 28/251] Installing libcurand-13-0-0:1 100% | 199.7 MiB/s | 126.6 MiB | 00m01s [ 29/251] Installing libcusolver-13-0-0 100% | 130.2 MiB/s | 233.8 MiB | 00m02s [ 30/251] Installing libcusparse-13-0-0 100% | 108.2 MiB/s | 155.1 MiB | 00m01s [ 31/251] Installing libnpp-13-0-0:13.0 100% | 116.2 MiB/s | 157.4 MiB | 00m01s [ 32/251] Installing libnvfatbin-13-0-0 100% | 142.3 MiB/s | 2.4 MiB | 00m00s [ 33/251] Installing libnvjitlink-13-0- 100% | 129.5 MiB/s | 94.3 MiB | 00m01s [ 34/251] Installing libnvjpeg-13-0-0:1 100% | 188.8 MiB/s | 5.7 MiB | 00m00s [ 35/251] Installing libjpeg-turbo-0:3. 
100% | 262.6 MiB/s | 806.6 KiB | 00m00s [ 36/251] Installing libseccomp-0:2.5.5 100% | 171.1 MiB/s | 175.2 KiB | 00m00s [ 37/251] Installing fribidi-0:1.0.16-2 100% | 12.0 MiB/s | 196.8 KiB | 00m00s [ 38/251] Installing make-1:4.4.1-10.fc 100% | 37.5 MiB/s | 1.8 MiB | 00m00s [ 39/251] Installing libnvidia-cfg-3:58 100% | 377.5 MiB/s | 386.5 KiB | 00m00s [ 40/251] Installing nvidia-driver-cuda 100% | 127.6 MiB/s | 343.5 MiB | 00m03s [ 41/251] Installing alsa-lib-0:1.2.14- 100% | 35.2 MiB/s | 1.4 MiB | 00m00s [ 42/251] Installing cuda-driver-devel- 100% | 33.5 MiB/s | 137.0 KiB | 00m00s [ 43/251] Installing cuda-cccl-13-0-0:1 100% | 49.6 MiB/s | 13.6 MiB | 00m00s [ 44/251] Installing libnvvm-13-0-0:13. 100% | 194.7 MiB/s | 133.6 MiB | 00m01s [ 45/251] Installing libnvptxcompiler-1 100% | 89.0 MiB/s | 85.4 MiB | 00m01s [ 46/251] Installing cuda-crt-13-0-0:13 100% | 19.6 MiB/s | 942.2 KiB | 00m00s [ 47/251] Installing cuda-nvrtc-13-0-0: 100% | 79.3 MiB/s | 217.4 MiB | 00m03s [ 48/251] Installing cuda-libraries-13- 100% | 0.0 B/s | 124.0 B | 00m00s [ 49/251] Installing cuda-nvml-devel-13 100% | 157.9 MiB/s | 1.4 MiB | 00m00s [ 50/251] Installing cuda-nvrtc-devel-1 100% | 258.2 MiB/s | 244.5 MiB | 00m01s [ 51/251] Installing cuda-cudart-devel- 100% | 284.7 MiB/s | 6.3 MiB | 00m00s [ 52/251] Installing systemd-shared-0:2 100% | 34.9 MiB/s | 4.6 MiB | 00m00s [ 53/251] Installing libnvjpeg-devel-13 100% | 257.0 MiB/s | 6.4 MiB | 00m00s [ 54/251] Installing libnvjitlink-devel 100% | 103.0 MiB/s | 130.0 MiB | 00m01s [ 55/251] Installing libnvfatbin-devel- 100% | 213.0 MiB/s | 2.3 MiB | 00m00s [ 56/251] Installing libnpp-devel-13-0- 100% | 113.0 MiB/s | 184.5 MiB | 00m02s [ 57/251] Installing libcusparse-devel- 100% | 225.9 MiB/s | 348.7 MiB | 00m02s [ 58/251] Installing libcusolver-devel- 100% | 126.7 MiB/s | 180.9 MiB | 00m01s [ 59/251] Installing libcurand-devel-13 100% | 113.6 MiB/s | 129.0 MiB | 00m01s [ 60/251] Installing libcufile-devel-13 100% | 105.0 MiB/s | 27.9 MiB 
| 00m00s [ 61/251] Installing libcufft-devel-13- 100% | 48.9 MiB/s | 280.5 MiB | 00m06s [ 62/251] Installing libcublas-devel-13 100% | 66.3 MiB/s | 907.6 MiB | 00m14s [ 63/251] Installing cuda-opencl-devel- 100% | 146.7 MiB/s | 751.3 KiB | 00m00s [ 64/251] Installing abattis-cantarell- 100% | 189.9 MiB/s | 194.4 KiB | 00m00s [ 65/251] Installing cpp-0:15.2.1-1.fc4 100% | 341.8 MiB/s | 37.9 MiB | 00m00s [ 66/251] Installing libnsl2-0:2.0.1-3. 100% | 57.6 MiB/s | 59.0 KiB | 00m00s [ 67/251] Installing nss-softokn-freebl 100% | 276.9 MiB/s | 850.5 KiB | 00m00s [ 68/251] Installing nss-softokn-0:3.11 100% | 388.1 MiB/s | 1.9 MiB | 00m00s [ 69/251] Installing nss-sysinit-0:3.11 100% | 1.4 MiB/s | 19.2 KiB | 00m00s [ 70/251] Installing nss-0:3.116.0-1.fc 100% | 144.9 MiB/s | 1.9 MiB | 00m00s [ 71/251] Installing libwayland-cursor- 100% | 37.6 MiB/s | 38.5 KiB | 00m00s [ 72/251] Installing cuda-sandbox-devel 100% | 74.1 MiB/s | 151.7 KiB | 00m00s [ 73/251] Installing annobin-docs-0:12. 100% | 97.7 MiB/s | 100.0 KiB | 00m00s [ 74/251] Installing gdbm-1:1.23-9.fc42 100% | 28.4 MiB/s | 465.2 KiB | 00m00s [ 75/251] Installing cracklib-0:2.9.11- 100% | 16.5 MiB/s | 253.7 KiB | 00m00s [ 76/251] Installing libpwquality-0:1.4 100% | 24.2 MiB/s | 421.6 KiB | 00m00s [ 77/251] Installing authselect-libs-0: 100% | 136.7 MiB/s | 840.0 KiB | 00m00s [ 78/251] Installing OpenCL-ICD-Loader- 100% | 70.1 MiB/s | 71.8 KiB | 00m00s [ 79/251] Installing cmake-filesystem-0 100% | 7.4 MiB/s | 7.6 KiB | 00m00s [ 80/251] Installing zlib-ng-compat-dev 100% | 106.0 MiB/s | 108.5 KiB | 00m00s [ 81/251] Installing libzstd-devel-0:1. 100% | 8.9 MiB/s | 208.8 KiB | 00m00s [ 82/251] Installing elfutils-libelf-de 100% | 2.1 MiB/s | 55.5 KiB | 00m00s [ 83/251] Installing libwebp-0:1.5.0-2. 100% | 309.8 MiB/s | 951.8 KiB | 00m00s [ 84/251] Installing liblerc-0:4.0.0-8. 
100% | 311.3 MiB/s | 637.6 KiB | 00m00s [ 85/251] Installing jbigkit-libs-0:2.1 100% | 120.5 MiB/s | 123.4 KiB | 00m00s [ 86/251] Installing libtiff-0:4.7.0-8. 100% | 151.7 MiB/s | 621.3 KiB | 00m00s [ 87/251] Installing kernel-headers-0:6 100% | 220.3 MiB/s | 6.8 MiB | 00m00s [ 88/251] Installing libxcrypt-devel-0: 100% | 16.2 MiB/s | 33.1 KiB | 00m00s [ 89/251] Installing glibc-devel-0:2.41 100% | 19.8 MiB/s | 2.3 MiB | 00m00s [ 90/251] Installing gcc-0:15.2.1-1.fc4 100% | 127.9 MiB/s | 111.3 MiB | 00m01s [ 91/251] Installing python-pip-wheel-0 100% | 622.2 MiB/s | 1.2 MiB | 00m00s [ 92/251] Installing mpdecimal-0:4.0.1- 100% | 17.8 MiB/s | 218.8 KiB | 00m00s >>> Running sysusers scriptlet: dbus-common-1:1.16.0-3.fc42.noarch >>> Finished sysusers scriptlet: dbus-common-1:1.16.0-3.fc42.noarch >>> Scriptlet output: >>> Creating group 'dbus' with GID 81. >>> Creating user 'dbus' (System Message Bus) with UID 81 and GID 81. >>> [ 93/251] Installing dbus-common-1:1.16 100% | 398.6 KiB/s | 13.6 KiB | 00m00s [ 94/251] Installing dbus-broker-0:36-6 100% | 19.0 MiB/s | 389.6 KiB | 00m00s [ 95/251] Installing dbus-1:1.16.0-3.fc 100% | 0.0 B/s | 124.0 B | 00m00s [ 96/251] Installing systemd-pam-0:257. 100% | 68.8 MiB/s | 1.1 MiB | 00m00s >>> Running sysusers scriptlet: systemd-0:257.9-2.fc42.x86_64 >>> Finished sysusers scriptlet: systemd-0:257.9-2.fc42.x86_64 >>> Scriptlet output: >>> Creating group 'systemd-journal' with GID 190. >>> >>> Running sysusers scriptlet: systemd-0:257.9-2.fc42.x86_64 >>> Finished sysusers scriptlet: systemd-0:257.9-2.fc42.x86_64 >>> Scriptlet output: >>> Creating group 'systemd-oom' with GID 999. >>> Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 999 and >>> [ 97/251] Installing systemd-0:257.9-2. 
100% | 46.4 MiB/s | 12.3 MiB | 00m00s >>> Running sysusers scriptlet: nvidia-persistenced-3:580.95.05-1.fc42.x86_64 >>> Finished sysusers scriptlet: nvidia-persistenced-3:580.95.05-1.fc42.x86_64 >>> Scriptlet output: >>> Creating group 'nvidia-persistenced' with GID 998. >>> Creating user 'nvidia-persistenced' (NVIDIA Persistence Daemon) with UID 998 >>> [ 98/251] Installing nvidia-persistence 100% | 2.0 MiB/s | 59.1 KiB | 00m00s [ 99/251] Installing tzdata-0:2025b-1.f 100% | 61.0 MiB/s | 1.9 MiB | 00m00s [100/251] Installing libb2-0:0.98.1-13. 100% | 9.2 MiB/s | 47.2 KiB | 00m00s [101/251] Installing python3-libs-0:3.1 100% | 337.0 MiB/s | 40.4 MiB | 00m00s [102/251] Installing python3-0:3.13.7-1 100% | 2.3 MiB/s | 30.5 KiB | 00m00s [103/251] Installing vulkan-loader-0:1. 100% | 261.2 MiB/s | 535.0 KiB | 00m00s [104/251] Installing pixman-0:0.46.2-1. 100% | 347.4 MiB/s | 711.4 KiB | 00m00s [105/251] Installing libdatrie-0:0.2.13 100% | 0.0 B/s | 58.9 KiB | 00m00s [106/251] Installing libthai-0:0.1.29-1 100% | 255.6 MiB/s | 785.2 KiB | 00m00s [107/251] Installing libwayland-egl-0:1 100% | 0.0 B/s | 13.6 KiB | 00m00s [108/251] Installing libicu-0:76.1-4.fc 100% | 399.3 MiB/s | 36.3 MiB | 00m00s [109/251] Installing adwaita-cursor-the 100% | 380.9 MiB/s | 11.4 MiB | 00m00s [110/251] Installing adwaita-icon-theme 100% | 9.6 MiB/s | 2.4 MiB | 00m00s [111/251] Installing adwaita-icon-theme 100% | 18.0 MiB/s | 1.3 MiB | 00m00s [112/251] Installing libusb1-0:1.0.29-4 100% | 2.3 MiB/s | 172.9 KiB | 00m00s [113/251] Installing hwdata-0:0.400-1.f 100% | 35.8 MiB/s | 9.6 MiB | 00m00s [114/251] Installing libpciaccess-0:0.1 100% | 44.8 MiB/s | 45.9 KiB | 00m00s [115/251] Installing libdrm-0:2.4.126-1 100% | 98.6 MiB/s | 403.7 KiB | 00m00s [116/251] Installing libstdc++-devel-0: 100% | 228.4 MiB/s | 16.2 MiB | 00m00s [117/251] Installing gcc-c++-0:15.2.1-1 100% | 68.4 MiB/s | 41.4 MiB | 00m01s [118/251] Installing cuda-nvcc-13-0-0:1 100% | 94.1 MiB/s | 111.0 MiB | 00m01s 
[119/251] Installing nettle-0:3.10.1-1. 100% | 258.3 MiB/s | 793.6 KiB | 00m00s
[120/251] Installing gnutls-0:3.8.10-1. 100% | 349.0 MiB/s | 3.8 MiB | 00m00s
[121/251] Installing glib2-0:2.84.4-1.f 100% | 294.1 MiB/s | 14.7 MiB | 00m00s
[122/251] Installing json-glib-0:1.10.8 100% | 118.0 MiB/s | 604.4 KiB | 00m00s
[123/251] Installing libgusb-0:0.4.9-3. 100% | 159.8 MiB/s | 163.7 KiB | 00m00s
[124/251] Installing libcloudproviders- 100% | 61.6 MiB/s | 126.2 KiB | 00m00s
[125/251] Installing shared-mime-info-0 100% | 15.3 MiB/s | 2.6 MiB | 00m00s
[126/251] Installing gdk-pixbuf2-0:2.42 100% | 109.5 MiB/s | 2.5 MiB | 00m00s
[127/251] Installing gtk-update-icon-ca 100% | 4.8 MiB/s | 63.3 KiB | 00m00s
[128/251] Installing gdk-pixbuf2-module 100% | 55.3 MiB/s | 56.6 KiB | 00m00s
[129/251] Installing avahi-glib-0:0.9~r 100% | 0.0 B/s | 24.4 KiB | 00m00s
[130/251] Installing libsoup3-0:3.6.5-6 100% | 191.7 MiB/s | 1.2 MiB | 00m00s
[131/251] Installing libtinysparql-0:3. 100% | 324.2 MiB/s | 1.3 MiB | 00m00s
[132/251] Installing lksctp-tools-0:1.0 100% | 17.8 MiB/s | 255.4 KiB | 00m00s
[133/251] Installing cups-filesystem-1: 100% | 0.0 B/s | 1.8 KiB | 00m00s
[134/251] Installing cups-libs-1:2.4.14 100% | 302.9 MiB/s | 620.2 KiB | 00m00s
[135/251] Installing spirv-tools-libs-0 100% | 413.0 MiB/s | 5.8 MiB | 00m00s
[136/251] Installing llvm-filesystem-0: 100% | 0.0 B/s | 1.1 KiB | 00m00s
[137/251] Installing libedit-0:3.1-55.2 100% | 240.0 MiB/s | 245.8 KiB | 00m00s
[138/251] Installing llvm-libs-0:20.1.8 100% | 102.4 MiB/s | 137.1 MiB | 00m01s
[139/251] Installing libfontenc-0:1.1.8 100% | 70.6 MiB/s | 72.3 KiB | 00m00s
[140/251] Installing tzdata-java-0:2025 100% | 12.3 MiB/s | 100.5 KiB | 00m00s
[141/251] Installing javapackages-files 100% | 5.4 MiB/s | 5.5 KiB | 00m00s
[142/251] Installing java-21-openjdk-he 100% | 121.2 MiB/s | 197.8 MiB | 00m02s
[143/251] Installing lcms2-0:2.16-5.fc4 100% | 2.0 MiB/s | 439.3 KiB | 00m00s
[144/251] Installing colord-libs-0:1.4. 100% | 166.7 MiB/s | 853.7 KiB | 00m00s
[145/251] Installing libepoxy-0:1.5.10- 100% | 15.0 MiB/s | 1.1 MiB | 00m00s
[146/251] Installing hicolor-icon-theme 100% | 19.5 MiB/s | 179.5 KiB | 00m00s
[147/251] Installing kmod-0:33-3.fc42.x 100% | 11.7 MiB/s | 239.9 KiB | 00m00s
[148/251] Installing dkms-0:3.2.2-1.fc4 100% | 6.9 MiB/s | 213.1 KiB | 00m00s
>>> Running %post scriptlet: dkms-0:3.2.2-1.fc42.noarch
>>> Finished %post scriptlet: dkms-0:3.2.2-1.fc42.noarch
>>> Scriptlet output:
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/dkms.service' →
>>>
[149/251] Installing nvidia-modprobe-3: 100% | 577.8 KiB/s | 55.5 KiB | 00m00s
[150/251] Installing kmod-nvidia-open-d 100% | 50.3 MiB/s | 120.0 MiB | 00m02s
[151/251] Installing nvidia-kmod-common 100% | 55.5 MiB/s | 100.4 MiB | 00m02s
>>> Running %post scriptlet: nvidia-kmod-common-3:580.95.05-1.fc42.noarch
>>> Finished %post scriptlet: nvidia-kmod-common-3:580.95.05-1.fc42.noarch
>>> Scriptlet output:
>>> Nvidia driver setup: no bootloader configured. Please run 'nvidia-boot-updat
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>>
[152/251] Installing opencl-filesystem- 100% | 0.0 B/s | 380.0 B | 00m00s
[153/251] Installing nvidia-driver-cuda 100% | 46.3 MiB/s | 1.4 MiB | 00m00s
[154/251] Installing cuda-runtime-13-0- 100% | 121.1 KiB/s | 124.0 B | 00m00s
[155/251] Installing openssl-1:3.2.6-2. 100% | 39.4 MiB/s | 1.7 MiB | 00m00s
[156/251] Installing libX11-common-0:1.
100% | 118.8 MiB/s | 1.2 MiB | 00m00s
[157/251] Installing mesa-filesystem-0: 100% | 4.2 MiB/s | 4.3 KiB | 00m00s
[158/251] Installing lm_sensors-libs-0: 100% | 84.9 MiB/s | 86.9 KiB | 00m00s
[159/251] Installing libxshmfence-0:1.3 100% | 0.0 B/s | 13.6 KiB | 00m00s
[160/251] Installing graphite2-0:1.3.14 100% | 11.4 MiB/s | 197.9 KiB | 00m00s
[161/251] Installing harfbuzz-0:10.4.0- 100% | 83.3 MiB/s | 2.7 MiB | 00m00s
[162/251] Installing freetype-0:2.13.3- 100% | 64.6 MiB/s | 859.9 KiB | 00m00s
[163/251] Installing mkfontscale-0:1.2. 100% | 3.2 MiB/s | 46.4 KiB | 00m00s
[164/251] Installing ttmkfdir-0:3.0.9-7 100% | 6.5 MiB/s | 119.6 KiB | 00m00s
[165/251] Installing google-noto-fonts- 100% | 0.0 B/s | 18.5 KiB | 00m00s
[166/251] Installing google-noto-sans-v 100% | 73.2 MiB/s | 1.4 MiB | 00m00s
[167/251] Installing default-fonts-core 100% | 8.9 MiB/s | 18.2 KiB | 00m00s
[168/251] Installing xkeyboard-config-0 100% | 166.8 MiB/s | 6.7 MiB | 00m00s
[169/251] Installing libxkbcommon-0:1.8 100% | 72.1 MiB/s | 369.1 KiB | 00m00s
[170/251] Installing libXau-0:1.0.12-2. 100% | 76.6 MiB/s | 78.5 KiB | 00m00s
[171/251] Installing libxcb-0:1.17.0-5. 100% | 108.0 MiB/s | 1.1 MiB | 00m00s
[172/251] Installing libX11-0:1.8.12-1. 100% | 71.2 MiB/s | 1.3 MiB | 00m00s
[173/251] Installing libXext-0:1.3.6-3. 100% | 44.5 MiB/s | 91.2 KiB | 00m00s
[174/251] Installing libXrender-0:0.9.1 100% | 25.0 MiB/s | 51.3 KiB | 00m00s
[175/251] Installing libXi-0:1.8.2-2.fc 100% | 83.7 MiB/s | 85.7 KiB | 00m00s
[176/251] Installing libXtst-0:1.2.5-2. 100% | 33.8 MiB/s | 34.6 KiB | 00m00s
[177/251] Installing libXcomposite-0:0. 100% | 22.5 MiB/s | 46.0 KiB | 00m00s
[178/251] Installing libXfixes-0:6.0.1- 100% | 30.8 MiB/s | 31.6 KiB | 00m00s
[179/251] Installing mesa-libgbm-0:25.1 100% | 20.0 MiB/s | 20.5 KiB | 00m00s
[180/251] Installing mesa-dri-drivers-0 100% | 104.6 MiB/s | 46.7 MiB | 00m00s
[181/251] Installing mesa-libEGL-0:25.1 100% | 109.3 MiB/s | 335.9 KiB | 00m00s
[182/251] Installing libglvnd-egl-1:1.7 100% | 68.7 MiB/s | 70.3 KiB | 00m00s
[183/251] Installing libXdamage-0:1.1.6 100% | 0.0 B/s | 45.2 KiB | 00m00s
[184/251] Installing libXrandr-0:1.5.4- 100% | 0.0 B/s | 57.0 KiB | 00m00s
[185/251] Installing nvidia-libXNVCtrl- 100% | 43.7 MiB/s | 44.8 KiB | 00m00s
[186/251] Installing libXxf86vm-0:1.1.6 100% | 29.8 MiB/s | 30.5 KiB | 00m00s
[187/251] Installing libvdpau-0:1.5-9.f 100% | 21.9 MiB/s | 22.4 KiB | 00m00s
[188/251] Installing libglvnd-glx-1:1.7 100% | 85.2 MiB/s | 610.6 KiB | 00m00s
[189/251] Installing mesa-libGL-0:25.1. 100% | 42.9 MiB/s | 307.2 KiB | 00m00s
[190/251] Installing egl-wayland-0:1.1. 100% | 41.5 MiB/s | 85.0 KiB | 00m00s
[191/251] Installing egl-x11-0:1.0.3-1. 100% | 54.7 MiB/s | 168.1 KiB | 00m00s
[192/251] Installing libglvnd-gles-1:1.
100% | 52.4 MiB/s | 107.3 KiB | 00m00s
[193/251] Installing egl-gbm-2:1.1.2.1- 100% | 29.9 MiB/s | 30.6 KiB | 00m00s
[194/251] Installing nvidia-driver-libs 100% | 70.0 MiB/s | 431.7 MiB | 00m06s
[195/251] Installing nvidia-driver-3:58 100% | 125.9 MiB/s | 18.1 MiB | 00m00s
>>> Running %post scriptlet: nvidia-driver-3:580.95.05-1.fc42.x86_64
>>> Finished %post scriptlet: nvidia-driver-3:580.95.05-1.fc42.x86_64
>>> Scriptlet output:
>>> Created symlink '/etc/systemd/system/systemd-hibernate.service.wants/nvidia-
>>> Unit /usr/lib/systemd/system/nvidia-hibernate.service is added as a dependen
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/nvidia-powerd.s
>>> Created symlink '/etc/systemd/system/systemd-suspend.service.wants/nvidia-re
>>> Unit /usr/lib/systemd/system/nvidia-resume.service is added as a dependency
>>> Created symlink '/etc/systemd/system/systemd-hibernate.service.wants/nvidia-
>>> Unit /usr/lib/systemd/system/nvidia-resume.service is added as a dependency
>>> Created symlink '/etc/systemd/system/systemd-suspend-then-hibernate.service.
>>> Unit /usr/lib/systemd/system/nvidia-resume.service is added as a dependency
>>> Created symlink '/etc/systemd/system/systemd-suspend.service.wants/nvidia-su
>>> Unit /usr/lib/systemd/system/nvidia-suspend.service is added as a dependency
>>> Created symlink '/etc/systemd/system/systemd-suspend-then-hibernate.service.
>>> Unit /usr/lib/systemd/system/nvidia-suspend-then-hibernate.service is added
>>>
[196/251] Installing libXcursor-0:1.2.3 100% | 57.7 MiB/s | 59.1 KiB | 00m00s
[197/251] Installing libXinerama-0:1.1. 100% | 0.0 B/s | 20.1 KiB | 00m00s
[198/251] Installing libnvidia-fbc-3:58 100% | 117.4 MiB/s | 240.4 KiB | 00m00s
[199/251] Installing xprop-0:1.2.8-3.fc 100% | 3.6 MiB/s | 56.1 KiB | 00m00s
[200/251] Installing at-spi2-core-0:2.5 100% | 128.2 MiB/s | 1.5 MiB | 00m00s
[201/251] Installing atk-0:2.56.5-1.fc4 100% | 244.2 MiB/s | 250.0 KiB | 00m00s
[202/251] Installing at-spi2-atk-0:2.56 100% | 135.6 MiB/s | 277.7 KiB | 00m00s
[203/251] Installing libxkbcommon-x11-0 100% | 0.0 B/s | 36.4 KiB | 00m00s
[204/251] Installing xcb-util-keysyms-0 100% | 17.4 MiB/s | 17.8 KiB | 00m00s
[205/251] Installing xcb-util-renderuti 100% | 25.2 MiB/s | 25.8 KiB | 00m00s
[206/251] Installing xcb-util-wm-0:0.4. 100% | 81.3 MiB/s | 83.2 KiB | 00m00s
[207/251] Installing xcb-util-0:0.4.1-7 100% | 0.0 B/s | 27.7 KiB | 00m00s
[208/251] Installing xcb-util-image-0:0 100% | 0.0 B/s | 23.6 KiB | 00m00s
[209/251] Installing libICE-0:1.1.2-2.f 100% | 97.6 MiB/s | 199.8 KiB | 00m00s
[210/251] Installing libSM-0:1.2.5-2.fc 100% | 20.8 MiB/s | 106.4 KiB | 00m00s
[211/251] Installing xml-common-0:0.6.3 100% | 79.2 MiB/s | 81.1 KiB | 00m00s
[212/251] Installing fontconfig-0:2.16. 100% | 736.1 KiB/s | 783.9 KiB | 00m01s
[213/251] Installing cairo-0:1.18.2-3.f 100% | 89.2 MiB/s | 1.8 MiB | 00m00s
[214/251] Installing cairo-gobject-0:1.
100% | 17.6 MiB/s | 36.0 KiB | 00m00s
[215/251] Installing nsight-systems-202 100% | 73.3 MiB/s | 1.0 GiB | 00m14s
[216/251] Installing cuda-nsight-system 100% | 169.8 KiB/s | 2.5 KiB | 00m00s
[217/251] Installing xorg-x11-fonts-Typ 100% | 799.3 KiB/s | 865.6 KiB | 00m01s
[218/251] Installing java-21-openjdk-1: 100% | 34.9 MiB/s | 1.0 MiB | 00m00s
[219/251] Installing cuda-nsight-13-0-0 100% | 38.5 MiB/s | 113.2 MiB | 00m03s
[220/251] Installing libXft-0:2.3.8-8.f 100% | 15.1 MiB/s | 169.9 KiB | 00m00s
[221/251] Installing pango-0:1.56.4-2.f 100% | 11.6 MiB/s | 1.0 MiB | 00m00s
[222/251] Installing gtk3-0:3.24.49-2.f 100% | 80.0 MiB/s | 23.1 MiB | 00m00s
[223/251] Installing nvidia-settings-3: 100% | 48.8 MiB/s | 1.7 MiB | 00m00s
[224/251] Installing nvidia-open-3:580. 100% | 30.3 KiB/s | 124.0 B | 00m00s
[225/251] Installing nsight-compute-202 100% | 76.1 MiB/s | 1.2 GiB | 00m16s
[226/251] Installing cuda-nsight-comput 100% | 0.0 B/s | 5.9 KiB | 00m00s
[227/251] Installing numactl-libs-0:2.0 100% | 52.5 MiB/s | 53.8 KiB | 00m00s
[228/251] Installing gds-tools-13-0-0:1 100% | 44.7 MiB/s | 60.0 MiB | 00m01s
[229/251] Installing cuda-nvtx-13-0-0:1 100% | 47.6 MiB/s | 633.7 KiB | 00m00s
[230/251] Installing cuda-nvdisasm-13-0 100% | 41.2 MiB/s | 4.8 MiB | 00m00s
[231/251] Installing cuda-gdb-13-0-0:13 100% | 72.5 MiB/s | 92.0 MiB | 00m01s
[232/251] Installing cuda-cupti-13-0-0: 100% | 102.8 MiB/s | 146.3 MiB | 00m01s
[233/251] Installing cuda-profiler-api- 100% | 25.8 MiB/s | 79.1 KiB | 00m00s
[234/251] Installing cuda-libraries-dev 100% | 0.0 B/s | 124.0 B | 00m00s
[235/251] Installing cuda-visual-tools- 100% | 121.1 KiB/s | 124.0 B | 00m00s
[236/251] Installing cuda-nvprune-13-0- 100% | 44.5 MiB/s | 182.1 KiB | 00m00s
[237/251] Installing cuda-cuxxfilt-13-0 100% | 87.4 MiB/s | 1.0 MiB | 00m00s
[238/251] Installing cuda-cuobjdump-13- 100% | 73.4 MiB/s | 751.3 KiB | 00m00s
[239/251] Installing cuda-compiler-13-0 100% | 0.0 B/s | 124.0 B | 00m00s
[240/251] Installing cuda-documentation 100% | 87.9 MiB/s | 539.9 KiB | 00m00s
[241/251] Installing cuda-sanitizer-13- 100% | 76.5 MiB/s | 41.3 MiB | 00m01s
[242/251] Installing cuda-command-line- 100% | 121.1 KiB/s | 124.0 B | 00m00s
[243/251] Installing cuda-tools-13-0-0: 100% | 0.0 B/s | 124.0 B | 00m00s
[244/251] Installing cuda-toolkit-13-0- 100% | 3.9 MiB/s | 4.0 KiB | 00m00s
[245/251] Installing cuda-13-0-0:13.0.2 100% | 0.0 B/s | 124.0 B | 00m00s
[246/251] Installing cuda-0:13.0.2-1.x8 100% | 40.4 KiB/s | 124.0 B | 00m00s
[247/251] Installing gcc-plugin-annobin 100% | 3.2 MiB/s | 58.6 KiB | 00m00s
[248/251] Installing annobin-plugin-gcc 100% | 22.6 MiB/s | 995.1 KiB | 00m00s
[249/251] Installing authselect-0:1.5.1 100% | 6.2 MiB/s | 158.2 KiB | 00m00s
[250/251] Installing pam-0:1.7.0-6.fc42 100% | 50.0 MiB/s | 1.7 MiB | 00m00s
[251/251] Installing systemd-rpm-macros 100% | 12.9 KiB/s | 11.3 KiB | 00m01s
Warning: skipped OpenPGP checks for 79 packages from repository: https_developer_download_nvidia_com_compute_cuda_repos_distname_releasever_basearch
Complete!
Finish: installing minimal buildroot with dnf5
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
INFO: OpenCL-ICD-Loader-3.0.6-2.20241023git5907ac1.fc42.x86_64 abattis-cantarell-vf-fonts-0.301-14.fc42.noarch add-determinism-0.6.0-1.fc42.x86_64 adwaita-cursor-theme-48.1-1.fc42.noarch adwaita-icon-theme-48.1-1.fc42.noarch adwaita-icon-theme-legacy-46.2-3.fc42.noarch alsa-lib-1.2.14-3.fc42.x86_64 alternatives-1.33-1.fc42.x86_64 annobin-docs-12.94-1.fc42.noarch annobin-plugin-gcc-12.94-1.fc42.x86_64 ansible-srpm-macros-1-17.1.fc42.noarch at-spi2-atk-2.56.5-1.fc42.x86_64 at-spi2-core-2.56.5-1.fc42.x86_64 atk-2.56.5-1.fc42.x86_64 audit-libs-4.1.1-1.fc42.x86_64 authselect-1.5.1-1.fc42.x86_64 authselect-libs-1.5.1-1.fc42.x86_64 avahi-glib-0.9~rc2-2.fc42.x86_64 avahi-libs-0.9~rc2-2.fc42.x86_64 basesystem-11-22.fc42.noarch bash-5.2.37-1.fc42.x86_64 binutils-2.44-6.fc42.x86_64 build-reproducibility-srpm-macros-0.6.0-1.fc42.noarch bzip2-1.0.8-20.fc42.x86_64 bzip2-libs-1.0.8-20.fc42.x86_64 ca-certificates-2025.2.80_v9.0.304-1.0.fc42.noarch cairo-1.18.2-3.fc42.x86_64 cairo-gobject-1.18.2-3.fc42.x86_64 cmake-filesystem-3.31.6-2.fc42.x86_64 colord-libs-1.4.7-6.fc42.x86_64 coreutils-9.6-6.fc42.x86_64 coreutils-common-9.6-6.fc42.x86_64 cpio-2.15-4.fc42.x86_64 cpp-15.2.1-1.fc42.x86_64 cracklib-2.9.11-7.fc42.x86_64 crypto-policies-20250707-1.gitad370a8.fc42.noarch cuda-13-0-13.0.2-1.x86_64 cuda-13.0.2-1.x86_64 cuda-cccl-13-0-13.0.85-1.x86_64 cuda-command-line-tools-13-0-13.0.2-1.x86_64 cuda-compiler-13-0-13.0.2-1.x86_64 cuda-crt-13-0-13.0.88-1.x86_64 cuda-cudart-13-0-13.0.96-1.x86_64 cuda-cudart-devel-13-0-13.0.96-1.x86_64 cuda-culibos-devel-13-0-13.0.85-1.x86_64 cuda-cuobjdump-13-0-13.0.85-1.x86_64 cuda-cupti-13-0-13.0.85-1.x86_64 cuda-cuxxfilt-13-0-13.0.85-1.x86_64 cuda-documentation-13-0-13.0.85-1.x86_64 cuda-driver-devel-13-0-13.0.96-1.x86_64 cuda-gdb-13-0-13.0.85-1.x86_64 cuda-libraries-13-0-13.0.2-1.x86_64
cuda-libraries-devel-13-0-13.0.2-1.x86_64 cuda-nsight-13-0-13.0.85-1.x86_64 cuda-nsight-compute-13-0-13.0.2-1.x86_64 cuda-nsight-systems-13-0-13.0.2-1.x86_64 cuda-nvcc-13-0-13.0.88-1.x86_64 cuda-nvdisasm-13-0-13.0.85-1.x86_64 cuda-nvml-devel-13-0-13.0.87-1.x86_64 cuda-nvprune-13-0-13.0.85-1.x86_64 cuda-nvrtc-13-0-13.0.88-1.x86_64 cuda-nvrtc-devel-13-0-13.0.88-1.x86_64 cuda-nvtx-13-0-13.0.85-1.x86_64 cuda-opencl-13-0-13.0.85-1.x86_64 cuda-opencl-devel-13-0-13.0.85-1.x86_64 cuda-profiler-api-13-0-13.0.85-1.x86_64 cuda-runtime-13-0-13.0.2-1.x86_64 cuda-sandbox-devel-13-0-13.0.85-1.x86_64 cuda-sanitizer-13-0-13.0.85-1.x86_64 cuda-toolkit-13-0-13.0.2-1.x86_64 cuda-toolkit-13-0-config-common-13.0.96-1.noarch cuda-toolkit-13-config-common-13.0.96-1.noarch cuda-toolkit-config-common-13.0.96-1.noarch cuda-tools-13-0-13.0.2-1.x86_64 cuda-visual-tools-13-0-13.0.2-1.x86_64 cups-filesystem-2.4.14-2.fc42.noarch cups-libs-2.4.14-2.fc42.x86_64 curl-8.11.1-6.fc42.x86_64 cyrus-sasl-lib-2.1.28-30.fc42.x86_64 dbus-1.16.0-3.fc42.x86_64 dbus-broker-36-6.fc42.x86_64 dbus-common-1.16.0-3.fc42.noarch dbus-libs-1.16.0-3.fc42.x86_64 debugedit-5.1-7.fc42.x86_64 default-fonts-core-sans-4.2-4.fc42.noarch diffutils-3.12-1.fc42.x86_64 dkms-3.2.2-1.fc42.noarch dwz-0.16-1.fc42.x86_64 ed-1.21-2.fc42.x86_64 efi-srpm-macros-6-3.fc42.noarch egl-gbm-1.1.2.1-1.fc42.x86_64 egl-wayland-1.1.20-2.fc42.x86_64 egl-x11-1.0.3-1.fc42.x86_64 elfutils-0.193-2.fc42.x86_64 elfutils-debuginfod-client-0.193-2.fc42.x86_64 elfutils-default-yama-scope-0.193-2.fc42.noarch elfutils-libelf-0.193-2.fc42.x86_64 elfutils-libelf-devel-0.193-2.fc42.x86_64 elfutils-libs-0.193-2.fc42.x86_64 expat-2.7.2-1.fc42.x86_64 fedora-gpg-keys-42-1.noarch fedora-release-42-30.noarch fedora-release-common-42-30.noarch fedora-release-identity-basic-42-30.noarch fedora-repos-42-1.noarch file-5.46-3.fc42.x86_64 file-libs-5.46-3.fc42.x86_64 filesystem-3.18-47.fc42.x86_64 filesystem-srpm-macros-3.18-47.fc42.noarch findutils-4.10.0-5.fc42.x86_64 
fontconfig-2.16.0-2.fc42.x86_64 fonts-filesystem-2.0.5-22.fc42.noarch fonts-srpm-macros-2.0.5-22.fc42.noarch forge-srpm-macros-0.4.0-2.fc42.noarch fpc-srpm-macros-1.3-14.fc42.noarch freetype-2.13.3-2.fc42.x86_64 fribidi-1.0.16-2.fc42.x86_64 gawk-5.3.1-1.fc42.x86_64 gcc-15.2.1-1.fc42.x86_64 gcc-c++-15.2.1-1.fc42.x86_64 gcc-plugin-annobin-15.2.1-1.fc42.x86_64 gdb-minimal-16.3-1.fc42.x86_64 gdbm-1.23-9.fc42.x86_64 gdbm-libs-1.23-9.fc42.x86_64 gdk-pixbuf2-2.42.12-12.fc42.x86_64 gdk-pixbuf2-modules-2.42.12-12.fc42.x86_64 gds-tools-13-0-1.15.1.6-1.x86_64 ghc-srpm-macros-1.9.2-2.fc42.noarch glib2-2.84.4-1.fc42.x86_64 glibc-2.41-11.fc42.x86_64 glibc-common-2.41-11.fc42.x86_64 glibc-devel-2.41-11.fc42.x86_64 glibc-gconv-extra-2.41-11.fc42.x86_64 glibc-minimal-langpack-2.41-11.fc42.x86_64 gmp-6.3.0-4.fc42.x86_64 gnat-srpm-macros-6-7.fc42.noarch gnulib-l10n-20241231-1.fc42.noarch gnutls-3.8.10-1.fc42.x86_64 go-srpm-macros-3.8.0-1.fc42.noarch google-noto-fonts-common-20250301-1.fc42.noarch google-noto-sans-vf-fonts-20250301-1.fc42.noarch gpg-pubkey-105ef944-65ca83d1 graphite2-1.3.14-18.fc42.x86_64 grep-3.11-10.fc42.x86_64 gtk-update-icon-cache-3.24.49-2.fc42.x86_64 gtk3-3.24.49-2.fc42.x86_64 gzip-1.13-3.fc42.x86_64 harfbuzz-10.4.0-1.fc42.x86_64 hicolor-icon-theme-0.17-20.fc42.noarch hwdata-0.400-1.fc42.noarch info-7.2-3.fc42.x86_64 jansson-2.14-2.fc42.x86_64 java-21-openjdk-21.0.8.0.9-1.fc42.x86_64 java-21-openjdk-headless-21.0.8.0.9-1.fc42.x86_64 javapackages-filesystem-6.4.0-5.fc42.noarch jbigkit-libs-2.1-31.fc42.x86_64 json-c-0.18-2.fc42.x86_64 json-glib-1.10.8-1.fc42.x86_64 kernel-headers-6.16.2-200.fc42.x86_64 kernel-srpm-macros-1.0-25.fc42.noarch keyutils-libs-1.6.3-5.fc42.x86_64 kmod-33-3.fc42.x86_64 kmod-nvidia-open-dkms-580.95.05-1.fc42.noarch krb5-libs-1.21.3-6.fc42.x86_64 lcms2-2.16-5.fc42.x86_64 libICE-1.1.2-2.fc42.x86_64 libSM-1.2.5-2.fc42.x86_64 libX11-1.8.12-1.fc42.x86_64 libX11-common-1.8.12-1.fc42.noarch libX11-xcb-1.8.12-1.fc42.x86_64 
libXau-1.0.12-2.fc42.x86_64 libXcomposite-0.4.6-5.fc42.x86_64 libXcursor-1.2.3-2.fc42.x86_64 libXdamage-1.1.6-5.fc42.x86_64 libXext-1.3.6-3.fc42.x86_64 libXfixes-6.0.1-5.fc42.x86_64 libXft-2.3.8-8.fc42.x86_64 libXi-1.8.2-2.fc42.x86_64 libXinerama-1.1.5-8.fc42.x86_64 libXrandr-1.5.4-5.fc42.x86_64 libXrender-0.9.12-2.fc42.x86_64 libXtst-1.2.5-2.fc42.x86_64 libXxf86vm-1.1.6-2.fc42.x86_64 libacl-2.3.2-3.fc42.x86_64 libarchive-3.8.1-1.fc42.x86_64 libattr-2.5.2-5.fc42.x86_64 libb2-0.98.1-13.fc42.x86_64 libblkid-2.40.4-7.fc42.x86_64 libbrotli-1.1.0-6.fc42.x86_64 libcap-2.73-2.fc42.x86_64 libcap-ng-0.8.5-4.fc42.x86_64 libcloudproviders-0.3.6-1.fc42.x86_64 libcom_err-1.47.2-3.fc42.x86_64 libcublas-13-0-13.1.0.3-1.x86_64 libcublas-devel-13-0-13.1.0.3-1.x86_64 libcufft-13-0-12.0.0.61-1.x86_64 libcufft-devel-13-0-12.0.0.61-1.x86_64 libcufile-13-0-1.15.1.6-1.x86_64 libcufile-devel-13-0-1.15.1.6-1.x86_64 libcurand-13-0-10.4.0.35-1.x86_64 libcurand-devel-13-0-10.4.0.35-1.x86_64 libcurl-8.11.1-6.fc42.x86_64 libcusolver-13-0-12.0.4.66-1.x86_64 libcusolver-devel-13-0-12.0.4.66-1.x86_64 libcusparse-13-0-12.6.3.3-1.x86_64 libcusparse-devel-13-0-12.6.3.3-1.x86_64 libdatrie-0.2.13-11.fc42.x86_64 libdrm-2.4.126-1.fc42.x86_64 libeconf-0.7.6-2.fc42.x86_64 libedit-3.1-55.20250104cvs.fc42.x86_64 libepoxy-1.5.10-9.fc42.x86_64 libevent-2.1.12-15.fc42.x86_64 libfdisk-2.40.4-7.fc42.x86_64 libffi-3.4.6-5.fc42.x86_64 libfontenc-1.1.8-3.fc42.x86_64 libgcc-15.2.1-1.fc42.x86_64 libglvnd-1.7.0-7.fc42.x86_64 libglvnd-egl-1.7.0-7.fc42.x86_64 libglvnd-gles-1.7.0-7.fc42.x86_64 libglvnd-glx-1.7.0-7.fc42.x86_64 libglvnd-opengl-1.7.0-7.fc42.x86_64 libgomp-15.2.1-1.fc42.x86_64 libgusb-0.4.9-3.fc42.x86_64 libicu-76.1-4.fc42.x86_64 libidn2-2.3.8-1.fc42.x86_64 libjpeg-turbo-3.1.2-1.fc42.x86_64 liblerc-4.0.0-8.fc42.x86_64 libmount-2.40.4-7.fc42.x86_64 libmpc-1.3.1-7.fc42.x86_64 libnghttp2-1.64.0-3.fc42.x86_64 libnpp-13-0-13.0.1.2-1.x86_64 libnpp-devel-13-0-13.0.1.2-1.x86_64 libnsl2-2.0.1-3.fc42.x86_64 
libnvfatbin-13-0-13.0.85-1.x86_64 libnvfatbin-devel-13-0-13.0.85-1.x86_64 libnvidia-cfg-580.95.05-1.fc42.x86_64 libnvidia-fbc-580.95.05-1.fc42.x86_64 libnvidia-gpucomp-580.95.05-1.fc42.x86_64 libnvidia-ml-580.95.05-1.fc42.x86_64 libnvjitlink-13-0-13.0.88-1.x86_64 libnvjitlink-devel-13-0-13.0.88-1.x86_64 libnvjpeg-13-0-13.0.1.86-1.x86_64 libnvjpeg-devel-13-0-13.0.1.86-1.x86_64 libnvptxcompiler-13-0-13.0.88-1.x86_64 libnvvm-13-0-13.0.88-1.x86_64 libpciaccess-0.16-15.fc42.x86_64 libpkgconf-2.3.0-2.fc42.x86_64 libpng-1.6.44-2.fc42.x86_64 libpsl-0.21.5-5.fc42.x86_64 libpwquality-1.4.5-12.fc42.x86_64 libseccomp-2.5.5-2.fc41.x86_64 libselinux-3.8-3.fc42.x86_64 libsemanage-3.8.1-2.fc42.x86_64 libsepol-3.8-1.fc42.x86_64 libsmartcols-2.40.4-7.fc42.x86_64 libsoup3-3.6.5-6.fc42.x86_64 libssh-0.11.3-1.fc42.x86_64 libssh-config-0.11.3-1.fc42.noarch libstdc++-15.2.1-1.fc42.x86_64 libstdc++-devel-15.2.1-1.fc42.x86_64 libtasn1-4.20.0-1.fc42.x86_64 libthai-0.1.29-10.fc42.x86_64 libtiff-4.7.0-8.fc42.x86_64 libtinysparql-3.9.2-1.fc42.x86_64 libtirpc-1.3.7-0.fc42.x86_64 libtool-ltdl-2.5.4-4.fc42.x86_64 libunistring-1.1-9.fc42.x86_64 libusb1-1.0.29-4.fc42.x86_64 libuuid-2.40.4-7.fc42.x86_64 libvdpau-1.5-9.fc42.x86_64 libverto-0.3.2-10.fc42.x86_64 libwayland-client-1.24.0-1.fc42.x86_64 libwayland-cursor-1.24.0-1.fc42.x86_64 libwayland-egl-1.24.0-1.fc42.x86_64 libwayland-server-1.24.0-1.fc42.x86_64 libwebp-1.5.0-2.fc42.x86_64 libxcb-1.17.0-5.fc42.x86_64 libxcrypt-4.4.38-7.fc42.x86_64 libxcrypt-devel-4.4.38-7.fc42.x86_64 libxkbcommon-1.8.1-1.fc42.x86_64 libxkbcommon-x11-1.8.1-1.fc42.x86_64 libxml2-2.12.10-1.fc42.x86_64 libxshmfence-1.3.2-6.fc42.x86_64 libzstd-1.5.7-1.fc42.x86_64 libzstd-devel-1.5.7-1.fc42.x86_64 lksctp-tools-1.0.21-1.fc42.x86_64 llvm-filesystem-20.1.8-4.fc42.x86_64 llvm-libs-20.1.8-4.fc42.x86_64 lm_sensors-libs-3.6.0-22.fc42.x86_64 lua-libs-5.4.8-1.fc42.x86_64 lua-srpm-macros-1-15.fc42.noarch lz4-libs-1.10.0-2.fc42.x86_64 make-4.4.1-10.fc42.x86_64 
mesa-dri-drivers-25.1.9-1.fc42.x86_64 mesa-filesystem-25.1.9-1.fc42.x86_64 mesa-libEGL-25.1.9-1.fc42.x86_64 mesa-libGL-25.1.9-1.fc42.x86_64 mesa-libgbm-25.1.9-1.fc42.x86_64 mkfontscale-1.2.3-2.fc42.x86_64 mpdecimal-4.0.1-1.fc42.x86_64 mpfr-4.2.2-1.fc42.x86_64 ncurses-base-6.5-5.20250125.fc42.noarch ncurses-libs-6.5-5.20250125.fc42.x86_64 nettle-3.10.1-1.fc42.x86_64 nsight-compute-2025.3.1-2025.3.1.4-1.x86_64 nsight-systems-2025.3.2-2025.3.2.474_253236389321v0-0.x86_64 nspr-4.37.0-3.fc42.x86_64 nss-3.116.0-1.fc42.x86_64 nss-softokn-3.116.0-1.fc42.x86_64 nss-softokn-freebl-3.116.0-1.fc42.x86_64 nss-sysinit-3.116.0-1.fc42.x86_64 nss-util-3.116.0-1.fc42.x86_64 numactl-libs-2.0.19-2.fc42.x86_64 nvidia-driver-580.95.05-1.fc42.x86_64 nvidia-driver-cuda-580.95.05-1.fc42.x86_64 nvidia-driver-cuda-libs-580.95.05-1.fc42.x86_64 nvidia-driver-libs-580.95.05-1.fc42.x86_64 nvidia-kmod-common-580.95.05-1.fc42.noarch nvidia-libXNVCtrl-580.95.05-1.fc42.x86_64 nvidia-modprobe-580.95.05-1.fc42.x86_64 nvidia-open-580.95.05-1.fc42.noarch nvidia-persistenced-580.95.05-1.fc42.x86_64 nvidia-settings-580.95.05-1.fc42.x86_64 ocaml-srpm-macros-10-4.fc42.noarch openblas-srpm-macros-2-19.fc42.noarch opencl-filesystem-1.0-22.fc42.noarch openldap-2.6.10-1.fc42.x86_64 openssl-3.2.6-2.fc42.x86_64 openssl-libs-3.2.6-2.fc42.x86_64 p11-kit-0.25.8-1.fc42.x86_64 p11-kit-trust-0.25.8-1.fc42.x86_64 package-notes-srpm-macros-0.5-13.fc42.noarch pam-1.7.0-6.fc42.x86_64 pam-libs-1.7.0-6.fc42.x86_64 pango-1.56.4-2.fc42.x86_64 patch-2.8-1.fc42.x86_64 pcre2-10.45-1.fc42.x86_64 pcre2-syntax-10.45-1.fc42.noarch perl-srpm-macros-1-57.fc42.noarch pixman-0.46.2-1.fc42.x86_64 pkgconf-2.3.0-2.fc42.x86_64 pkgconf-m4-2.3.0-2.fc42.noarch pkgconf-pkg-config-2.3.0-2.fc42.x86_64 popt-1.19-8.fc42.x86_64 publicsuffix-list-dafsa-20250616-1.fc42.noarch pyproject-srpm-macros-1.18.4-1.fc42.noarch python-pip-wheel-24.3.1-5.fc42.noarch python-srpm-macros-3.13-5.fc42.noarch python3-3.13.7-1.fc42.x86_64 
python3-libs-3.13.7-1.fc42.x86_64 qt5-srpm-macros-5.15.17-1.fc42.noarch qt6-srpm-macros-6.9.2-1.fc42.noarch readline-8.2-13.fc42.x86_64 redhat-rpm-config-342-4.fc42.noarch rpm-4.20.1-1.fc42.x86_64 rpm-build-4.20.1-1.fc42.x86_64 rpm-build-libs-4.20.1-1.fc42.x86_64 rpm-libs-4.20.1-1.fc42.x86_64 rpm-sequoia-1.7.0-5.fc42.x86_64 rust-srpm-macros-26.4-1.fc42.noarch sed-4.9-4.fc42.x86_64 setup-2.15.0-13.fc42.noarch shadow-utils-4.17.4-1.fc42.x86_64 shared-mime-info-2.3-7.fc42.x86_64 spirv-tools-libs-2025.2-2.fc42.x86_64 sqlite-libs-3.47.2-5.fc42.x86_64 systemd-257.9-2.fc42.x86_64 systemd-libs-257.9-2.fc42.x86_64 systemd-pam-257.9-2.fc42.x86_64 systemd-rpm-macros-257.9-2.fc42.noarch systemd-shared-257.9-2.fc42.x86_64 systemd-standalone-sysusers-257.9-2.fc42.x86_64 tar-1.35-5.fc42.x86_64 tree-sitter-srpm-macros-0.1.0-8.fc42.noarch ttmkfdir-3.0.9-72.fc42.x86_64 tzdata-2025b-1.fc42.noarch tzdata-java-2025b-1.fc42.noarch unzip-6.0-66.fc42.x86_64 util-linux-2.40.4-7.fc42.x86_64 util-linux-core-2.40.4-7.fc42.x86_64 vulkan-loader-1.4.313.0-1.fc42.x86_64 which-2.23-2.fc42.x86_64 xcb-util-0.4.1-7.fc42.x86_64 xcb-util-image-0.4.1-7.fc42.x86_64 xcb-util-keysyms-0.4.1-7.fc42.x86_64 xcb-util-renderutil-0.3.10-7.fc42.x86_64 xcb-util-wm-0.4.2-7.fc42.x86_64 xkeyboard-config-2.44-1.fc42.noarch xml-common-0.6.3-66.fc42.noarch xorg-x11-fonts-Type1-7.5-40.fc42.noarch xprop-1.2.8-3.fc42.x86_64 xxhash-libs-0.8.3-2.fc42.x86_64 xz-5.8.1-2.fc42.x86_64 xz-libs-5.8.1-2.fc42.x86_64 zig-srpm-macros-1-4.fc42.noarch zip-3.0-43.fc42.x86_64 zlib-ng-compat-2.2.5-2.fc42.x86_64 zlib-ng-compat-devel-2.2.5-2.fc42.x86_64 zstd-1.5.7-1.fc42.x86_64
Start: buildsrpm
Start: rpmbuild -bs
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1760486400
Wrote: /builddir/build/SRPMS/ollama-0.12.5-1.fc42.src.rpm
Finish: rpmbuild -bs
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-42-x86_64-1760489761.899953/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
Finish: buildsrpm
INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-a5p7eyev/ollama/ollama.spec) Config(child) 14 minutes 58 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
INFO: Start(/var/lib/copr-rpmbuild/results/ollama-0.12.5-1.fc42.src.rpm) Config(fedora-42-x86_64)
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1760489761.899953/root.
INFO: reusing tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1760489761.899953/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-1760489761.899953/root.
INFO: calling preinit hooks
INFO: enabled root cache
Start: unpacking root cache
Finish: unpacking root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-4.20.1-1.fc42.x86_64 rpm-sequoia-1.7.0-5.fc42.x86_64 dnf5-5.2.16.0-1.fc42.x86_64 dnf5-plugins-5.2.16.0-1.fc42.x86_64
Finish: chroot init
Start: build phase for ollama-0.12.5-1.fc42.src.rpm
Start: build setup for ollama-0.12.5-1.fc42.src.rpm
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1760486400
Wrote: /builddir/build/SRPMS/ollama-0.12.5-1.fc42.src.rpm
Updating and loading repositories:
 Additional repo https_developer_downlo 100% | 10.8 KiB/s | 3.9 KiB | 00m00s
 Copr repository 100% | 4.9 KiB/s | 1.8 KiB | 00m00s
 fedora 100% | 51.5 KiB/s | 30.3 KiB | 00m01s
 updates 100% | 64.6 KiB/s | 29.0 KiB | 00m00s
Repositories loaded.
Package "gcc-c++-15.2.1-1.fc42.x86_64" is already installed.
Package "systemd-257.9-2.fc42.x86_64" is already installed.
Package Arch Version Repository Size
Installing:
 ccache x86_64 4.10.2-2.fc42 fedora 1.6 MiB
 cmake x86_64 3.31.6-2.fc42 fedora 34.2 MiB
 git x86_64 2.51.0-2.fc42 updates 56.4 KiB
 golang x86_64 1.24.8-1.fc42 updates 8.9 MiB
Installing dependencies:
 cmake-data noarch 3.31.6-2.fc42 fedora 8.5 MiB
 cmake-rpm-macros noarch 3.31.6-2.fc42 fedora 7.7 KiB
 emacs-filesystem noarch 1:30.0-4.fc42 fedora 0.0 B
 fmt x86_64 11.1.4-1.fc42 fedora 263.9 KiB
 git-core x86_64 2.51.0-2.fc42 updates 23.6 MiB
 git-core-doc noarch 2.51.0-2.fc42 updates 17.7 MiB
 go-filesystem x86_64 3.8.0-1.fc42 updates 0.0 B
 golang-bin x86_64 1.24.8-1.fc42 updates 122.0 MiB
 golang-src noarch 1.24.8-1.fc42 updates 79.2 MiB
 groff-base x86_64 1.23.0-8.fc42 fedora 3.9 MiB
 hiredis x86_64 1.2.0-6.fc42 fedora 105.9 KiB
 jsoncpp x86_64 1.9.6-1.fc42 fedora 261.6 KiB
 less x86_64 679-1.fc42 updates 406.1 KiB
 libcbor x86_64 0.11.0-3.fc42 fedora 77.8 KiB
 libfido2 x86_64 1.15.0-3.fc42 fedora 242.1 KiB
 libuv x86_64 1:1.51.0-1.fc42 updates 570.2 KiB
 ncurses x86_64 6.5-5.20250125.fc42 fedora 608.1 KiB
 openssh x86_64 9.9p1-11.fc42 updates 1.4 MiB
 openssh-clients x86_64 9.9p1-11.fc42 updates 2.7 MiB
 perl-AutoLoader noarch 5.74-519.fc42 updates 20.5 KiB
 perl-B x86_64 1.89-519.fc42 updates 498.0 KiB
 perl-Carp noarch 1.54-512.fc42 fedora 46.6 KiB
 perl-Class-Struct noarch 0.68-519.fc42 updates 25.4 KiB
 perl-Data-Dumper x86_64 2.189-513.fc42 fedora 115.6 KiB
 perl-Digest noarch 1.20-512.fc42 fedora 35.3 KiB
 perl-Digest-MD5 x86_64 2.59-6.fc42 fedora 59.7 KiB
 perl-DynaLoader x86_64 1.56-519.fc42 updates 32.1 KiB
 perl-Encode x86_64 4:3.21-512.fc42 fedora 4.7 MiB
 perl-Errno x86_64 1.38-519.fc42 updates 8.3 KiB
 perl-Error noarch 1:0.17030-1.fc42 fedora 76.7 KiB
 perl-Exporter noarch 5.78-512.fc42 fedora 54.3 KiB
 perl-Fcntl x86_64 1.18-519.fc42 updates 48.9 KiB
 perl-File-Basename noarch 2.86-519.fc42 updates 14.0 KiB
 perl-File-Path noarch 2.18-512.fc42 fedora 63.5 KiB
 perl-File-Temp noarch 1:0.231.100-512.fc42 fedora 162.3 KiB
 perl-File-stat noarch 1.14-519.fc42 updates 12.5 KiB
 perl-FileHandle noarch 2.05-519.fc42 updates 9.3 KiB
 perl-Getopt-Long noarch 1:2.58-3.fc42 fedora 144.5 KiB
 perl-Getopt-Std noarch 1.14-519.fc42 updates 11.2 KiB
 perl-Git noarch 2.51.0-2.fc42 updates 64.4 KiB
 perl-HTTP-Tiny noarch 0.090-2.fc42 fedora 154.4 KiB
 perl-IO x86_64 1.55-519.fc42 updates 147.0 KiB
 perl-IO-Socket-IP noarch 0.43-2.fc42 fedora 100.3 KiB
 perl-IO-Socket-SSL noarch 2.089-2.fc42 fedora 703.3 KiB
 perl-IPC-Open3 noarch 1.22-519.fc42 updates 22.5 KiB
 perl-MIME-Base32 noarch 1.303-23.fc42 fedora 30.7 KiB
 perl-MIME-Base64 x86_64 3.16-512.fc42 fedora 42.0 KiB
 perl-Net-SSLeay x86_64 1.94-8.fc42 fedora 1.3 MiB
 perl-POSIX x86_64 2.20-519.fc42 updates 231.0 KiB
 perl-PathTools x86_64 3.91-513.fc42 fedora 180.0 KiB
 perl-Pod-Escapes noarch 1:1.07-512.fc42 fedora 24.9 KiB
 perl-Pod-Perldoc noarch 3.28.01-513.fc42 fedora 163.7 KiB
 perl-Pod-Simple noarch 1:3.45-512.fc42 fedora 560.8 KiB
 perl-Pod-Usage noarch 4:2.05-1.fc42 fedora 86.3 KiB
 perl-Scalar-List-Utils x86_64 5:1.70-1.fc42 updates 144.9 KiB
 perl-SelectSaver noarch 1.02-519.fc42 updates 2.2 KiB
 perl-Socket x86_64 4:2.038-512.fc42 fedora 119.9 KiB
 perl-Storable x86_64 1:3.32-512.fc42 fedora 232.3 KiB
 perl-Symbol noarch 1.09-519.fc42 updates 6.8 KiB
 perl-Term-ANSIColor noarch 5.01-513.fc42 fedora 97.5 KiB
 perl-Term-Cap noarch 1.18-512.fc42 fedora 29.3 KiB
 perl-TermReadKey x86_64 2.38-24.fc42 fedora 64.0 KiB
 perl-Text-ParseWords noarch 3.31-512.fc42 fedora 13.6 KiB
 perl-Text-Tabs+Wrap noarch 2024.001-512.fc42 fedora 22.6 KiB
 perl-Time-Local noarch 2:1.350-512.fc42 fedora 68.9 KiB
 perl-URI noarch 5.31-2.fc42 fedora 257.0 KiB
 perl-base noarch 2.27-519.fc42 updates 12.5 KiB
 perl-constant noarch 1.33-513.fc42 fedora 26.2 KiB
 perl-if noarch 0.61.000-519.fc42 updates 5.8 KiB
 perl-interpreter x86_64 4:5.40.3-519.fc42 updates 118.4 KiB
 perl-lib x86_64 0.65-519.fc42 updates 8.5 KiB
 perl-libnet noarch 3.15-513.fc42 fedora 289.4 KiB
 perl-libs x86_64 4:5.40.3-519.fc42 updates 9.8 MiB
perl-locale noarch 1.12-519.fc42 updates 6.5 KiB
perl-mro x86_64 1.29-519.fc42 updates 41.5 KiB
perl-overload noarch 1.37-519.fc42 updates 71.5 KiB
perl-overloading noarch 0.02-519.fc42 updates 4.8 KiB
perl-parent noarch 1:0.244-2.fc42 fedora 10.3 KiB
perl-podlators noarch 1:6.0.2-3.fc42 fedora 317.5 KiB
perl-vars noarch 1.05-519.fc42 updates 3.9 KiB
rhash x86_64 1.4.5-2.fc42 fedora 351.0 KiB
vim-filesystem noarch 2:9.1.1818-1.fc42 updates 40.0 B

Transaction Summary:
Installing: 86 packages

Total size of inbound packages is 77 MiB. Need to download 0 B.
After this operation, 328 MiB extra will be used (install 328 MiB, remove 0 B).
[ 1/86] ccache-0:4.10.2-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 2/86] cmake-0:3.31.6-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 3/86] git-0:2.51.0-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 4/86] golang-0:1.24.8-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 5/86] fmt-0:11.1.4-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 6/86] hiredis-0:1.2.0-6.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 7/86] cmake-data-0:3.31.6-2.fc42.noar 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 8/86] jsoncpp-0:1.9.6-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[ 9/86] rhash-0:1.4.5-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[10/86] perl-Getopt-Long-1:2.58-3.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[11/86] perl-PathTools-0:3.91-513.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[12/86] perl-TermReadKey-0:2.38-24.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[13/86] git-core-0:2.51.0-2.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[14/86] git-core-doc-0:2.51.0-2.fc42.no 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[15/86] perl-Git-0:2.51.0-2.fc42.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[16/86] golang-bin-0:1.24.8-1.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[17/86] golang-src-0:1.24.8-1.fc42.noar 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[18/86] emacs-filesystem-1:30.0-4.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[19/86] perl-Exporter-0:5.78-512.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[20/86] perl-Pod-Usage-4:2.05-1.fc42.no 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[21/86] perl-Text-ParseWords-0:3.31-512 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[22/86] perl-constant-0:1.33-513.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[23/86] perl-Carp-0:1.54-512.fc42.noarc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[24/86] perl-Error-1:0.17030-1.fc42.noa 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[25/86] perl-Pod-Perldoc-0:3.28.01-513. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[26/86] perl-podlators-1:6.0.2-3.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[27/86] groff-base-0:1.23.0-8.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[28/86] perl-File-Temp-1:0.231.100-512. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[29/86] perl-HTTP-Tiny-0:0.090-2.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[30/86] perl-Pod-Simple-1:3.45-512.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[31/86] perl-parent-1:0.244-2.fc42.noar 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[32/86] perl-Term-ANSIColor-0:5.01-513. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[33/86] perl-Term-Cap-0:1.18-512.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[34/86] perl-File-Path-0:2.18-512.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[35/86] perl-IO-Socket-SSL-0:2.089-2.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[36/86] perl-MIME-Base64-0:3.16-512.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[37/86] perl-Net-SSLeay-0:1.94-8.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[38/86] perl-Socket-4:2.038-512.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[39/86] perl-Time-Local-2:1.350-512.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[40/86] perl-Pod-Escapes-1:1.07-512.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[41/86] perl-Text-Tabs+Wrap-0:2024.001- 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[42/86] ncurses-0:6.5-5.20250125.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[43/86] perl-IO-Socket-IP-0:0.43-2.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[44/86] perl-URI-0:5.31-2.fc42.noarch 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[45/86] perl-Data-Dumper-0:2.189-513.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[46/86] perl-MIME-Base32-0:1.303-23.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[47/86] perl-libnet-0:3.15-513.fc42.noa 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[48/86] perl-Digest-MD5-0:2.59-6.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[49/86] perl-Digest-0:1.20-512.fc42.noa 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[50/86] perl-libs-4:5.40.3-519.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[51/86] perl-interpreter-4:5.40.3-519.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[52/86] perl-Errno-0:1.38-519.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[53/86] go-filesystem-0:3.8.0-1.fc42.x8 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[54/86] less-0:679-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[55/86] openssh-clients-0:9.9p1-11.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[56/86] libfido2-0:1.15.0-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[57/86] openssh-0:9.9p1-11.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[58/86] libcbor-0:0.11.0-3.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[59/86] perl-File-Basename-0:2.86-519.f 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[60/86] perl-IPC-Open3-0:1.22-519.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[61/86] perl-lib-0:0.65-519.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[62/86] perl-Encode-4:3.21-512.fc42.x86 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[63/86] perl-Storable-1:3.32-512.fc42.x 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[64/86] perl-POSIX-0:2.20-519.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[65/86] perl-Fcntl-0:1.18-519.fc42.x86_ 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[66/86] perl-FileHandle-0:2.05-519.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[67/86] perl-IO-0:1.55-519.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[68/86] perl-Symbol-0:1.09-519.fc42.noa 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[69/86] perl-Scalar-List-Utils-5:1.70-1 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[70/86] perl-base-0:2.27-519.fc42.noarc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[71/86] perl-overload-0:1.37-519.fc42.n 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[72/86] perl-DynaLoader-0:1.56-519.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[73/86] perl-vars-0:1.05-519.fc42.noarc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[74/86] perl-if-0:0.61.000-519.fc42.noa 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[75/86] perl-AutoLoader-0:5.74-519.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[76/86] perl-Getopt-Std-0:1.14-519.fc42 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[77/86] perl-B-0:1.89-519.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[78/86] vim-filesystem-2:9.1.1818-1.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[79/86] libuv-1:1.51.0-1.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[80/86] perl-mro-0:1.29-519.fc42.x86_64 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[81/86] perl-overloading-0:0.02-519.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[82/86] perl-locale-0:1.12-519.fc42.noa 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[83/86] perl-File-stat-0:1.14-519.fc42. 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[84/86] perl-SelectSaver-0:1.02-519.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[85/86] perl-Class-Struct-0:0.68-519.fc 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
[86/86] cmake-rpm-macros-0:3.31.6-2.fc4 100% | 0.0 B/s | 0.0 B | 00m00s >>> Already downloaded
--------------------------------------------------------------------------------
[86/86] Total 100% | 0.0 B/s | 0.0 B | 00m00s
Running transaction
[ 1/88] Verify package files 100% | 355.0 B/s | 86.0 B | 00m00s
[ 2/88] Prepare transaction 100% | 886.0 B/s | 86.0 B | 00m00s
[ 3/88] Installing cmake-rpm-macros-0:3 100% | 4.1 MiB/s | 8.3 KiB | 00m00s
[ 4/88] Installing libuv-1:1.51.0-1.fc4 100% | 279.8 MiB/s | 573.0 KiB | 00m00s
[ 5/88] Installing vim-filesystem-2:9.1 100% | 4.6 MiB/s | 4.7 KiB | 00m00s
[ 6/88] Installing libcbor-0:0.11.0-3.f 100% | 77.3 MiB/s | 79.2 KiB | 00m00s
[ 7/88] Installing libfido2-0:1.15.0-3. 100% | 237.9 MiB/s | 243.6 KiB | 00m00s
[ 8/88] Installing openssh-0:9.9p1-11.f 100% | 81.3 MiB/s | 1.4 MiB | 00m00s
[ 9/88] Installing openssh-clients-0:9. 100% | 69.4 MiB/s | 2.7 MiB | 00m00s
[10/88] Installing less-0:679-1.fc42.x8 100% | 25.0 MiB/s | 409.4 KiB | 00m00s
[11/88] Installing git-core-0:2.51.0-2. 100% | 337.9 MiB/s | 23.7 MiB | 00m00s
[12/88] Installing git-core-doc-0:2.51. 100% | 365.1 MiB/s | 17.9 MiB | 00m00s
[13/88] Installing go-filesystem-0:3.8. 100% | 0.0 B/s | 392.0 B | 00m00s
[14/88] Installing ncurses-0:6.5-5.2025 100% | 26.1 MiB/s | 614.7 KiB | 00m00s
[15/88] Installing groff-base-0:1.23.0- 100% | 114.5 MiB/s | 3.9 MiB | 00m00s
[16/88] Installing perl-Digest-0:1.20-5 100% | 36.2 MiB/s | 37.1 KiB | 00m00s
[17/88] Installing perl-Digest-MD5-0:2. 100% | 60.1 MiB/s | 61.6 KiB | 00m00s
[18/88] Installing perl-B-0:1.89-519.fc 100% | 244.8 MiB/s | 501.3 KiB | 00m00s
[19/88] Installing perl-FileHandle-0:2. 100% | 0.0 B/s | 9.8 KiB | 00m00s
[20/88] Installing perl-MIME-Base32-0:1 100% | 31.4 MiB/s | 32.2 KiB | 00m00s
[21/88] Installing perl-Data-Dumper-0:2 100% | 114.7 MiB/s | 117.5 KiB | 00m00s
[22/88] Installing perl-libnet-0:3.15-5 100% | 143.9 MiB/s | 294.7 KiB | 00m00s
[23/88] Installing perl-AutoLoader-0:5. 100% | 0.0 B/s | 20.9 KiB | 00m00s
[24/88] Installing perl-IO-Socket-IP-0: 100% | 99.8 MiB/s | 102.2 KiB | 00m00s
[25/88] Installing perl-URI-0:5.31-2.fc 100% | 87.8 MiB/s | 269.6 KiB | 00m00s
[26/88] Installing perl-Time-Local-2:1. 100% | 0.0 B/s | 70.6 KiB | 00m00s
[27/88] Installing perl-Text-Tabs+Wrap- 100% | 0.0 B/s | 23.9 KiB | 00m00s
[28/88] Installing perl-File-Path-0:2.1 100% | 0.0 B/s | 64.5 KiB | 00m00s
[29/88] Installing perl-Pod-Escapes-1:1 100% | 0.0 B/s | 25.9 KiB | 00m00s
[30/88] Installing perl-if-0:0.61.000-5 100% | 0.0 B/s | 6.2 KiB | 00m00s
[31/88] Installing perl-Net-SSLeay-0:1. 100% | 271.7 MiB/s | 1.4 MiB | 00m00s
[32/88] Installing perl-locale-0:1.12-5 100% | 0.0 B/s | 6.9 KiB | 00m00s
[33/88] Installing perl-IO-Socket-SSL-0 100% | 345.4 MiB/s | 707.4 KiB | 00m00s
[34/88] Installing perl-Term-ANSIColor- 100% | 96.9 MiB/s | 99.2 KiB | 00m00s
[35/88] Installing perl-Term-Cap-0:1.18 100% | 29.9 MiB/s | 30.6 KiB | 00m00s
[36/88] Installing perl-Pod-Simple-1:3. 100% | 278.5 MiB/s | 570.4 KiB | 00m00s
[37/88] Installing perl-POSIX-0:2.20-51 100% | 226.9 MiB/s | 232.3 KiB | 00m00s
[38/88] Installing perl-File-Temp-1:0.2 100% | 160.2 MiB/s | 164.1 KiB | 00m00s
[39/88] Installing perl-IPC-Open3-0:1.2 100% | 0.0 B/s | 23.3 KiB | 00m00s
[40/88] Installing perl-HTTP-Tiny-0:0.0 100% | 152.8 MiB/s | 156.4 KiB | 00m00s
[41/88] Installing perl-Class-Struct-0: 100% | 0.0 B/s | 25.9 KiB | 00m00s
[42/88] Installing perl-Socket-4:2.038- 100% | 119.1 MiB/s | 122.0 KiB | 00m00s
[43/88] Installing perl-Symbol-0:1.09-5 100% | 0.0 B/s | 7.2 KiB | 00m00s
[44/88] Installing perl-SelectSaver-0:1 100% | 0.0 B/s | 2.6 KiB | 00m00s
[45/88] Installing perl-podlators-1:6.0 100% | 22.4 MiB/s | 321.4 KiB | 00m00s
[46/88] Installing perl-Pod-Perldoc-0:3 100% | 11.8 MiB/s | 169.2 KiB | 00m00s
[47/88] Installing perl-File-stat-0:1.1 100% | 0.0 B/s | 13.1 KiB | 00m00s
[48/88] Installing perl-Text-ParseWords 100% | 0.0 B/s | 14.6 KiB | 00m00s
[49/88] Installing perl-Fcntl-0:1.18-51 100% | 0.0 B/s | 50.0 KiB | 00m00s
[50/88] Installing perl-base-0:2.27-519 100% | 0.0 B/s | 12.9 KiB | 00m00s
[51/88] Installing perl-mro-0:1.29-519. 100% | 0.0 B/s | 42.6 KiB | 00m00s
[52/88] Installing perl-overloading-0:0 100% | 0.0 B/s | 5.5 KiB | 00m00s
[53/88] Installing perl-Pod-Usage-4:2.0 100% | 6.6 MiB/s | 87.9 KiB | 00m00s
[54/88] Installing perl-IO-0:1.55-519.f 100% | 147.7 MiB/s | 151.3 KiB | 00m00s
[55/88] Installing perl-constant-0:1.33 100% | 0.0 B/s | 27.4 KiB | 00m00s
[56/88] Installing perl-parent-1:0.244- 100% | 0.0 B/s | 11.0 KiB | 00m00s
[57/88] Installing perl-MIME-Base64-0:3 100% | 43.2 MiB/s | 44.3 KiB | 00m00s
[58/88] Installing perl-Errno-0:1.38-51 100% | 0.0 B/s | 8.7 KiB | 00m00s
[59/88] Installing perl-File-Basename-0 100% | 0.0 B/s | 14.6 KiB | 00m00s
[60/88] Installing perl-Scalar-List-Uti 100% | 145.2 MiB/s | 148.6 KiB | 00m00s
[61/88] Installing perl-vars-0:1.05-519 100% | 0.0 B/s | 4.3 KiB | 00m00s
[62/88] Installing perl-Getopt-Std-0:1. 100% | 0.0 B/s | 11.7 KiB | 00m00s
[63/88] Installing perl-overload-0:1.37 100% | 0.0 B/s | 71.9 KiB | 00m00s
[64/88] Installing perl-Storable-1:3.32 100% | 228.4 MiB/s | 233.9 KiB | 00m00s
[65/88] Installing perl-Getopt-Long-1:2 100% | 143.8 MiB/s | 147.2 KiB | 00m00s
[66/88] Installing perl-Exporter-0:5.78 100% | 0.0 B/s | 55.6 KiB | 00m00s
[67/88] Installing perl-Carp-0:1.54-512 100% | 0.0 B/s | 47.7 KiB | 00m00s
[68/88] Installing perl-PathTools-0:3.9 100% | 180.2 MiB/s | 184.5 KiB | 00m00s
[69/88] Installing perl-DynaLoader-0:1. 100% | 0.0 B/s | 32.5 KiB | 00m00s
[70/88] Installing perl-Encode-4:3.21-5 100% | 180.5 MiB/s | 4.7 MiB | 00m00s
[71/88] Installing perl-libs-4:5.40.3-5 100% | 274.7 MiB/s | 9.9 MiB | 00m00s
[72/88] Installing perl-interpreter-4:5 100% | 9.0 MiB/s | 120.1 KiB | 00m00s
[73/88] Installing perl-TermReadKey-0:2 100% | 64.6 MiB/s | 66.2 KiB | 00m00s
[74/88] Installing perl-Error-1:0.17030 100% | 78.1 MiB/s | 80.0 KiB | 00m00s
[75/88] Installing perl-lib-0:0.65-519. 100% | 0.0 B/s | 8.9 KiB | 00m00s
[76/88] Installing perl-Git-0:2.51.0-2. 100% | 0.0 B/s | 65.4 KiB | 00m00s
[77/88] Installing git-0:2.51.0-2.fc42. 100% | 0.0 B/s | 57.7 KiB | 00m00s
[78/88] Installing emacs-filesystem-1:3 100% | 48.3 KiB/s | 544.0 B | 00m00s
[79/88] Installing golang-src-0:1.24.8- 100% | 243.6 MiB/s | 80.2 MiB | 00m00s
[80/88] Installing golang-bin-0:1.24.8- 100% | 132.4 MiB/s | 122.1 MiB | 00m01s
[81/88] Installing golang-0:1.24.8-1.fc 100% | 308.6 MiB/s | 9.0 MiB | 00m00s
[82/88] Installing rhash-0:1.4.5-2.fc42 100% | 23.2 MiB/s | 356.4 KiB | 00m00s
[83/88] Installing jsoncpp-0:1.9.6-1.fc 100% | 28.6 MiB/s | 263.1 KiB | 00m00s
[84/88] Installing cmake-data-0:3.31.6- 100% | 51.5 MiB/s | 9.1 MiB | 00m00s
[85/88] Installing cmake-0:3.31.6-2.fc4 100% | 116.4 MiB/s | 34.2 MiB | 00m00s
[86/88] Installing hiredis-0:1.2.0-6.fc 100% | 105.1 MiB/s | 107.6 KiB | 00m00s
[87/88] Installing fmt-0:11.1.4-1.fc42. 100% | 18.5 MiB/s | 265.4 KiB | 00m00s
[88/88] Installing ccache-0:4.10.2-2.fc 100% | 4.2 MiB/s | 1.5 MiB | 00m00s
Complete!
Finish: build setup for ollama-0.12.5-1.fc42.src.rpm
Start: rpmbuild ollama-0.12.5-1.fc42.src.rpm
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1760486400
Executing(%mkbuilddir): /bin/sh -e /var/tmp/rpm-tmp.XcaDxK
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.Tz9KPR
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.5-build
+ cd /builddir/build/BUILD/ollama-0.12.5-build
+ rm -rf ollama-0.12.5
+ /usr/lib/rpm/rpmuncompress -x -v /builddir/build/SOURCES/v0.12.5.zip
TZ=UTC /usr/bin/unzip -u '/builddir/build/SOURCES/v0.12.5.zip'
Archive: /builddir/build/SOURCES/v0.12.5.zip
3d32249c749c6f77c1dc8a7cb55ae74fc2f4c08b
creating: ollama-0.12.5/
inflating: ollama-0.12.5/.dockerignore
inflating: ollama-0.12.5/.gitattributes
creating: ollama-0.12.5/.github/
creating: ollama-0.12.5/.github/ISSUE_TEMPLATE/
inflating: ollama-0.12.5/.github/ISSUE_TEMPLATE/10_bug_report.yml
inflating: ollama-0.12.5/.github/ISSUE_TEMPLATE/20_feature_request.md
inflating: ollama-0.12.5/.github/ISSUE_TEMPLATE/30_model_request.md
inflating: ollama-0.12.5/.github/ISSUE_TEMPLATE/config.yml
creating: ollama-0.12.5/.github/workflows/
inflating: ollama-0.12.5/.github/workflows/latest.yaml
inflating: ollama-0.12.5/.github/workflows/release.yaml
inflating: ollama-0.12.5/.github/workflows/test.yaml
inflating: ollama-0.12.5/.gitignore
inflating: ollama-0.12.5/.golangci.yaml
inflating: ollama-0.12.5/CMakeLists.txt
inflating: ollama-0.12.5/CMakePresets.json
inflating: ollama-0.12.5/CONTRIBUTING.md
inflating: ollama-0.12.5/Dockerfile
inflating: ollama-0.12.5/LICENSE
inflating: ollama-0.12.5/Makefile.sync
inflating: ollama-0.12.5/README.md
inflating: ollama-0.12.5/SECURITY.md
creating: ollama-0.12.5/api/
inflating: ollama-0.12.5/api/client.go
inflating: ollama-0.12.5/api/client_test.go
creating: ollama-0.12.5/api/examples/
inflating: ollama-0.12.5/api/examples/README.md
creating: ollama-0.12.5/api/examples/chat/
inflating: ollama-0.12.5/api/examples/chat/main.go
creating: ollama-0.12.5/api/examples/generate-streaming/
inflating: ollama-0.12.5/api/examples/generate-streaming/main.go
creating: ollama-0.12.5/api/examples/generate/
inflating: ollama-0.12.5/api/examples/generate/main.go
creating: ollama-0.12.5/api/examples/multimodal/
inflating: ollama-0.12.5/api/examples/multimodal/main.go
creating: ollama-0.12.5/api/examples/pull-progress/
inflating: ollama-0.12.5/api/examples/pull-progress/main.go
inflating: ollama-0.12.5/api/types.go
inflating: ollama-0.12.5/api/types_test.go
inflating: ollama-0.12.5/api/types_typescript_test.go
creating: ollama-0.12.5/app/
extracting: ollama-0.12.5/app/.gitignore
inflating: ollama-0.12.5/app/README.md
creating: ollama-0.12.5/app/assets/
inflating: ollama-0.12.5/app/assets/app.ico
inflating: ollama-0.12.5/app/assets/assets.go
inflating: ollama-0.12.5/app/assets/setup.bmp
inflating: ollama-0.12.5/app/assets/tray.ico
inflating: ollama-0.12.5/app/assets/tray_upgrade.ico
creating: ollama-0.12.5/app/lifecycle/
inflating: ollama-0.12.5/app/lifecycle/getstarted_nonwindows.go
inflating: ollama-0.12.5/app/lifecycle/getstarted_windows.go
inflating: ollama-0.12.5/app/lifecycle/lifecycle.go
inflating: ollama-0.12.5/app/lifecycle/logging.go
inflating: ollama-0.12.5/app/lifecycle/logging_nonwindows.go
inflating: ollama-0.12.5/app/lifecycle/logging_test.go
inflating: ollama-0.12.5/app/lifecycle/logging_windows.go
inflating: ollama-0.12.5/app/lifecycle/paths.go
inflating: ollama-0.12.5/app/lifecycle/server.go
inflating: ollama-0.12.5/app/lifecycle/server_unix.go
inflating: ollama-0.12.5/app/lifecycle/server_windows.go
inflating: ollama-0.12.5/app/lifecycle/updater.go
inflating: ollama-0.12.5/app/lifecycle/updater_nonwindows.go
inflating: ollama-0.12.5/app/lifecycle/updater_windows.go
inflating: ollama-0.12.5/app/main.go
inflating: ollama-0.12.5/app/ollama.iss
inflating: ollama-0.12.5/app/ollama.rc
inflating: ollama-0.12.5/app/ollama_welcome.ps1
creating: ollama-0.12.5/app/store/
inflating: ollama-0.12.5/app/store/store.go
inflating: ollama-0.12.5/app/store/store_darwin.go
inflating: ollama-0.12.5/app/store/store_linux.go
inflating: ollama-0.12.5/app/store/store_windows.go
creating: ollama-0.12.5/app/tray/
creating: ollama-0.12.5/app/tray/commontray/
inflating: ollama-0.12.5/app/tray/commontray/types.go
inflating: ollama-0.12.5/app/tray/tray.go
inflating: ollama-0.12.5/app/tray/tray_nonwindows.go
inflating: ollama-0.12.5/app/tray/tray_windows.go
creating: ollama-0.12.5/app/tray/wintray/
inflating: ollama-0.12.5/app/tray/wintray/eventloop.go
inflating: ollama-0.12.5/app/tray/wintray/menus.go
inflating: ollama-0.12.5/app/tray/wintray/messages.go
inflating: ollama-0.12.5/app/tray/wintray/notifyicon.go
inflating: ollama-0.12.5/app/tray/wintray/tray.go
inflating: ollama-0.12.5/app/tray/wintray/w32api.go
inflating: ollama-0.12.5/app/tray/wintray/winclass.go
creating: ollama-0.12.5/auth/
inflating: ollama-0.12.5/auth/auth.go
creating: ollama-0.12.5/cmd/
inflating: ollama-0.12.5/cmd/cmd.go
inflating: ollama-0.12.5/cmd/cmd_test.go
inflating: ollama-0.12.5/cmd/interactive.go
inflating: ollama-0.12.5/cmd/interactive_test.go
creating: ollama-0.12.5/cmd/runner/
inflating: ollama-0.12.5/cmd/runner/main.go
inflating: ollama-0.12.5/cmd/start.go
inflating: ollama-0.12.5/cmd/start_darwin.go
inflating: ollama-0.12.5/cmd/start_default.go
inflating: ollama-0.12.5/cmd/start_windows.go
inflating: ollama-0.12.5/cmd/warn_thinking_test.go
creating: ollama-0.12.5/convert/
inflating: ollama-0.12.5/convert/convert.go
inflating: ollama-0.12.5/convert/convert_bert.go
inflating: ollama-0.12.5/convert/convert_commandr.go
inflating: ollama-0.12.5/convert/convert_gemma.go
inflating: ollama-0.12.5/convert/convert_gemma2.go
inflating: ollama-0.12.5/convert/convert_gemma2_adapter.go
inflating: ollama-0.12.5/convert/convert_gemma3.go
inflating: ollama-0.12.5/convert/convert_gemma3n.go
inflating: ollama-0.12.5/convert/convert_gptoss.go
inflating: ollama-0.12.5/convert/convert_llama.go
inflating: ollama-0.12.5/convert/convert_llama4.go
inflating: ollama-0.12.5/convert/convert_llama_adapter.go
inflating: ollama-0.12.5/convert/convert_mistral.go
inflating: ollama-0.12.5/convert/convert_mixtral.go
inflating: ollama-0.12.5/convert/convert_mllama.go
inflating: ollama-0.12.5/convert/convert_phi3.go
inflating: ollama-0.12.5/convert/convert_qwen2.go
inflating: ollama-0.12.5/convert/convert_qwen25vl.go
inflating: ollama-0.12.5/convert/convert_test.go
inflating: ollama-0.12.5/convert/reader.go
inflating: ollama-0.12.5/convert/reader_safetensors.go
inflating: ollama-0.12.5/convert/reader_test.go
inflating: ollama-0.12.5/convert/reader_torch.go
creating: ollama-0.12.5/convert/sentencepiece/
inflating: ollama-0.12.5/convert/sentencepiece/sentencepiece_model.pb.go
inflating: ollama-0.12.5/convert/sentencepiece_model.proto
inflating: ollama-0.12.5/convert/tensor.go
inflating: ollama-0.12.5/convert/tensor_test.go
creating: ollama-0.12.5/convert/testdata/
inflating: ollama-0.12.5/convert/testdata/Meta-Llama-3-8B-Instruct.json
inflating: ollama-0.12.5/convert/testdata/Meta-Llama-3.1-8B-Instruct.json
inflating: ollama-0.12.5/convert/testdata/Mistral-7B-Instruct-v0.2.json
inflating: ollama-0.12.5/convert/testdata/Mixtral-8x7B-Instruct-v0.1.json
inflating: ollama-0.12.5/convert/testdata/Phi-3-mini-128k-instruct.json
inflating: ollama-0.12.5/convert/testdata/Qwen2.5-0.5B-Instruct.json
inflating: ollama-0.12.5/convert/testdata/all-MiniLM-L6-v2.json
inflating: ollama-0.12.5/convert/testdata/c4ai-command-r-v01.json
inflating: ollama-0.12.5/convert/testdata/gemma-2-2b-it.json
inflating: ollama-0.12.5/convert/testdata/gemma-2-9b-it.json
inflating: ollama-0.12.5/convert/testdata/gemma-2b-it.json
inflating: ollama-0.12.5/convert/tokenizer.go
inflating: ollama-0.12.5/convert/tokenizer_spm.go
inflating: ollama-0.12.5/convert/tokenizer_test.go
creating: ollama-0.12.5/discover/
inflating: ollama-0.12.5/discover/cpu_linux.go
inflating: ollama-0.12.5/discover/cpu_linux_test.go
inflating: ollama-0.12.5/discover/cpu_windows.go
inflating: ollama-0.12.5/discover/cpu_windows_test.go
inflating: ollama-0.12.5/discover/gpu.go
inflating: ollama-0.12.5/discover/gpu_darwin.go
inflating: ollama-0.12.5/discover/gpu_info_darwin.h
inflating: ollama-0.12.5/discover/gpu_info_darwin.m
inflating: ollama-0.12.5/discover/path.go
inflating: ollama-0.12.5/discover/runner.go
inflating: ollama-0.12.5/discover/runner_test.go
inflating: ollama-0.12.5/discover/types.go
creating: ollama-0.12.5/docs/
inflating: ollama-0.12.5/docs/README.md
inflating: ollama-0.12.5/docs/api.md
inflating: ollama-0.12.5/docs/cloud.md
inflating: ollama-0.12.5/docs/development.md
inflating: ollama-0.12.5/docs/docker.md
inflating: ollama-0.12.5/docs/examples.md
inflating: ollama-0.12.5/docs/faq.md
inflating: ollama-0.12.5/docs/gpu.md
creating: ollama-0.12.5/docs/images/
inflating: ollama-0.12.5/docs/images/ollama-keys.png
inflating: ollama-0.12.5/docs/images/signup.png
inflating: ollama-0.12.5/docs/import.md
inflating: ollama-0.12.5/docs/linux.md
inflating: ollama-0.12.5/docs/macos.md
inflating: ollama-0.12.5/docs/modelfile.md
inflating: ollama-0.12.5/docs/openai.md
inflating: ollama-0.12.5/docs/template.md
inflating: ollama-0.12.5/docs/troubleshooting.md
inflating: ollama-0.12.5/docs/windows.md
creating: ollama-0.12.5/envconfig/
inflating: ollama-0.12.5/envconfig/config.go
inflating: ollama-0.12.5/envconfig/config_test.go
creating: ollama-0.12.5/format/
inflating: ollama-0.12.5/format/bytes.go
inflating: ollama-0.12.5/format/bytes_test.go
inflating: ollama-0.12.5/format/format.go
inflating: ollama-0.12.5/format/format_test.go
inflating: ollama-0.12.5/format/time.go
inflating: ollama-0.12.5/format/time_test.go
creating: ollama-0.12.5/fs/
inflating: ollama-0.12.5/fs/config.go
creating: ollama-0.12.5/fs/ggml/
inflating: ollama-0.12.5/fs/ggml/ggml.go
inflating: ollama-0.12.5/fs/ggml/ggml_test.go
inflating: ollama-0.12.5/fs/ggml/gguf.go
inflating: ollama-0.12.5/fs/ggml/gguf_test.go
inflating: ollama-0.12.5/fs/ggml/type.go
creating: ollama-0.12.5/fs/gguf/
inflating: ollama-0.12.5/fs/gguf/gguf.go
inflating: ollama-0.12.5/fs/gguf/gguf_test.go
inflating: ollama-0.12.5/fs/gguf/keyvalue.go
inflating: ollama-0.12.5/fs/gguf/keyvalue_test.go
inflating: ollama-0.12.5/fs/gguf/lazy.go
inflating: ollama-0.12.5/fs/gguf/reader.go
inflating: ollama-0.12.5/fs/gguf/tensor.go
creating: ollama-0.12.5/fs/util/
creating: ollama-0.12.5/fs/util/bufioutil/
inflating: ollama-0.12.5/fs/util/bufioutil/buffer_seeker.go
inflating: ollama-0.12.5/fs/util/bufioutil/buffer_seeker_test.go
inflating: ollama-0.12.5/go.mod
inflating: ollama-0.12.5/go.sum
creating: ollama-0.12.5/harmony/
inflating: ollama-0.12.5/harmony/harmonyparser.go
inflating: ollama-0.12.5/harmony/harmonyparser_test.go
creating: ollama-0.12.5/integration/
inflating: ollama-0.12.5/integration/README.md
inflating: ollama-0.12.5/integration/api_test.go
inflating: ollama-0.12.5/integration/basic_test.go
inflating: ollama-0.12.5/integration/concurrency_test.go
inflating: ollama-0.12.5/integration/context_test.go
inflating: ollama-0.12.5/integration/embed_test.go
inflating: ollama-0.12.5/integration/library_models_test.go
inflating: ollama-0.12.5/integration/llm_image_test.go
inflating: ollama-0.12.5/integration/max_queue_test.go
inflating: ollama-0.12.5/integration/model_arch_test.go
inflating: ollama-0.12.5/integration/model_perf_test.go
inflating: ollama-0.12.5/integration/quantization_test.go
creating: ollama-0.12.5/integration/testdata/
inflating: ollama-0.12.5/integration/testdata/embed.json
inflating: ollama-0.12.5/integration/testdata/shakespeare.txt
inflating: ollama-0.12.5/integration/utils_test.go
creating: ollama-0.12.5/kvcache/
inflating: ollama-0.12.5/kvcache/cache.go
inflating: ollama-0.12.5/kvcache/causal.go
inflating: ollama-0.12.5/kvcache/causal_test.go
inflating: ollama-0.12.5/kvcache/encoder.go
inflating: ollama-0.12.5/kvcache/wrapper.go
creating: ollama-0.12.5/llama/
extracting: ollama-0.12.5/llama/.gitignore
inflating: ollama-0.12.5/llama/README.md
inflating: ollama-0.12.5/llama/build-info.cpp
inflating: ollama-0.12.5/llama/build-info.cpp.in
creating: ollama-0.12.5/llama/llama.cpp/
inflating: ollama-0.12.5/llama/llama.cpp/.rsync-filter
inflating: ollama-0.12.5/llama/llama.cpp/LICENSE
creating: ollama-0.12.5/llama/llama.cpp/common/
inflating: ollama-0.12.5/llama/llama.cpp/common/base64.hpp
inflating: ollama-0.12.5/llama/llama.cpp/common/common.cpp
inflating: ollama-0.12.5/llama/llama.cpp/common/common.go
inflating: ollama-0.12.5/llama/llama.cpp/common/common.h
inflating: ollama-0.12.5/llama/llama.cpp/common/json-schema-to-grammar.cpp
inflating: ollama-0.12.5/llama/llama.cpp/common/json-schema-to-grammar.h
inflating: ollama-0.12.5/llama/llama.cpp/common/log.cpp
inflating: ollama-0.12.5/llama/llama.cpp/common/log.h
inflating: ollama-0.12.5/llama/llama.cpp/common/sampling.cpp
inflating: ollama-0.12.5/llama/llama.cpp/common/sampling.h
creating: ollama-0.12.5/llama/llama.cpp/include/
inflating: ollama-0.12.5/llama/llama.cpp/include/llama-cpp.h
inflating: ollama-0.12.5/llama/llama.cpp/include/llama.h
creating: ollama-0.12.5/llama/llama.cpp/src/
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-adapter.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-adapter.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-arch.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-arch.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-batch.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-batch.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-chat.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-chat.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-context.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-context.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-cparams.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-cparams.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-grammar.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-grammar.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-graph.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-graph.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-hparams.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-hparams.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-impl.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-impl.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-io.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-io.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cache-iswa.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cache-iswa.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cache.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cache.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cells.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory-hybrid.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory-hybrid.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory-recurrent.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory-recurrent.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-mmap.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-mmap.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model-loader.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model-loader.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model-saver.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model-saver.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-quant.cpp
extracting: ollama-0.12.5/llama/llama.cpp/src/llama-quant.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-sampling.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-sampling.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-vocab.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama-vocab.h
inflating: ollama-0.12.5/llama/llama.cpp/src/llama.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/llama.go
inflating: ollama-0.12.5/llama/llama.cpp/src/unicode-data.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/unicode-data.h
inflating: ollama-0.12.5/llama/llama.cpp/src/unicode.cpp
inflating: ollama-0.12.5/llama/llama.cpp/src/unicode.h
creating: ollama-0.12.5/llama/llama.cpp/tools/
creating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/clip-impl.h
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/clip.cpp
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/clip.h
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd-audio.cpp
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd-audio.h
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd-helper.cpp
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd-helper.h
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd.cpp
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd.go
inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd.h
creating: ollama-0.12.5/llama/llama.cpp/vendor/
creating: ollama-0.12.5/llama/llama.cpp/vendor/miniaudio/
inflating: ollama-0.12.5/llama/llama.cpp/vendor/miniaudio/miniaudio.h
creating: ollama-0.12.5/llama/llama.cpp/vendor/nlohmann/
inflating: ollama-0.12.5/llama/llama.cpp/vendor/nlohmann/json.hpp
inflating: ollama-0.12.5/llama/llama.cpp/vendor/nlohmann/json_fwd.hpp
creating: ollama-0.12.5/llama/llama.cpp/vendor/stb/
inflating: ollama-0.12.5/llama/llama.cpp/vendor/stb/stb_image.h
inflating: ollama-0.12.5/llama/llama.go
inflating: ollama-0.12.5/llama/llama_test.go
creating: ollama-0.12.5/llama/patches/
extracting: ollama-0.12.5/llama/patches/.gitignore
inflating: ollama-0.12.5/llama/patches/0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch
inflating: ollama-0.12.5/llama/patches/0002-pretokenizer.patch
inflating: ollama-0.12.5/llama/patches/0003-clip-unicode.patch
inflating: ollama-0.12.5/llama/patches/0004-solar-pro.patch
inflating: ollama-0.12.5/llama/patches/0005-fix-deepseek-deseret-regex.patch
inflating: ollama-0.12.5/llama/patches/0006-maintain-ordering-for-rules-for-grammar.patch
inflating: ollama-0.12.5/llama/patches/0007-sort-devices-by-score.patch
inflating: ollama-0.12.5/llama/patches/0008-add-phony-target-ggml-cpu-for-all-cpu-variants.patch
inflating: ollama-0.12.5/llama/patches/0009-remove-amx.patch
inflating: ollama-0.12.5/llama/patches/0010-fix-string-arr-kv-loading.patch
inflating: ollama-0.12.5/llama/patches/0011-ollama-debug-tensor.patch
inflating: ollama-0.12.5/llama/patches/0012-add-ollama-vocab-for-grammar-support.patch
inflating: ollama-0.12.5/llama/patches/0013-add-argsort-and-cuda-copy-for-i32.patch
inflating: ollama-0.12.5/llama/patches/0014-graph-memory-reporting-on-failure.patch
inflating: ollama-0.12.5/llama/patches/0015-ggml-Export-GPU-UUIDs.patch
inflating: ollama-0.12.5/llama/patches/0016-add-C-API-for-mtmd_input_text.patch
inflating: ollama-0.12.5/llama/patches/0017-no-power-throttling-win32-with-gnuc.patch
inflating: ollama-0.12.5/llama/patches/0018-BF16-macos-version-guard.patch
inflating: ollama-0.12.5/llama/patches/0019-Enable-CUDA-Graphs-for-gemma3n.patch
inflating: ollama-0.12.5/llama/patches/0020-Disable-ggml-blas-on-macos-v13-and-older.patch
inflating: ollama-0.12.5/llama/patches/0021-fix-mtmd-audio.cpp-build-on-windows.patch
inflating: ollama-0.12.5/llama/patches/0022-ggml-No-alloc-mode.patch
inflating: ollama-0.12.5/llama/patches/0023-decode-disable-output_all.patch
inflating: ollama-0.12.5/llama/patches/0024-ggml-Enable-resetting-backend-devices.patch
inflating: ollama-0.12.5/llama/patches/0025-harden-uncaught-exception-registration.patch
inflating: ollama-0.12.5/llama/patches/0026-GPU-discovery-enhancements.patch
inflating: ollama-0.12.5/llama/sampling_ext.cpp
inflating: ollama-0.12.5/llama/sampling_ext.h
creating: ollama-0.12.5/llm/
inflating: ollama-0.12.5/llm/llm_darwin.go
inflating: ollama-0.12.5/llm/llm_linux.go
inflating: ollama-0.12.5/llm/llm_windows.go
inflating: ollama-0.12.5/llm/memory.go
inflating: ollama-0.12.5/llm/memory_test.go
inflating: ollama-0.12.5/llm/server.go
inflating: ollama-0.12.5/llm/server_test.go
inflating: ollama-0.12.5/llm/status.go
creating: ollama-0.12.5/logutil/
inflating: ollama-0.12.5/logutil/logutil.go
creating: ollama-0.12.5/macapp/
inflating: ollama-0.12.5/macapp/.eslintrc.json
inflating: ollama-0.12.5/macapp/.gitignore
inflating: ollama-0.12.5/macapp/README.md
creating: ollama-0.12.5/macapp/assets/
inflating: ollama-0.12.5/macapp/assets/icon.icns
extracting: ollama-0.12.5/macapp/assets/iconDarkTemplate.png
extracting: ollama-0.12.5/macapp/assets/iconDarkTemplate@2x.png
extracting: ollama-0.12.5/macapp/assets/iconDarkUpdateTemplate.png
extracting:
ollama-0.12.5/macapp/assets/iconDarkUpdateTemplate@2x.png extracting: ollama-0.12.5/macapp/assets/iconTemplate.png extracting: ollama-0.12.5/macapp/assets/iconTemplate@2x.png extracting: ollama-0.12.5/macapp/assets/iconUpdateTemplate.png extracting: ollama-0.12.5/macapp/assets/iconUpdateTemplate@2x.png inflating: ollama-0.12.5/macapp/forge.config.ts inflating: ollama-0.12.5/macapp/package-lock.json inflating: ollama-0.12.5/macapp/package.json inflating: ollama-0.12.5/macapp/postcss.config.js creating: ollama-0.12.5/macapp/src/ inflating: ollama-0.12.5/macapp/src/app.css inflating: ollama-0.12.5/macapp/src/app.tsx inflating: ollama-0.12.5/macapp/src/declarations.d.ts inflating: ollama-0.12.5/macapp/src/index.html inflating: ollama-0.12.5/macapp/src/index.ts inflating: ollama-0.12.5/macapp/src/install.ts inflating: ollama-0.12.5/macapp/src/ollama.svg extracting: ollama-0.12.5/macapp/src/preload.ts inflating: ollama-0.12.5/macapp/src/renderer.tsx inflating: ollama-0.12.5/macapp/tailwind.config.js inflating: ollama-0.12.5/macapp/tsconfig.json inflating: ollama-0.12.5/macapp/webpack.main.config.ts inflating: ollama-0.12.5/macapp/webpack.plugins.ts inflating: ollama-0.12.5/macapp/webpack.renderer.config.ts inflating: ollama-0.12.5/macapp/webpack.rules.ts inflating: ollama-0.12.5/main.go creating: ollama-0.12.5/middleware/ inflating: ollama-0.12.5/middleware/openai.go inflating: ollama-0.12.5/middleware/openai_test.go creating: ollama-0.12.5/ml/ inflating: ollama-0.12.5/ml/backend.go creating: ollama-0.12.5/ml/backend/ inflating: ollama-0.12.5/ml/backend/backend.go creating: ollama-0.12.5/ml/backend/ggml/ inflating: ollama-0.12.5/ml/backend/ggml/ggml.go creating: ollama-0.12.5/ml/backend/ggml/ggml/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/.rsync-filter inflating: ollama-0.12.5/ml/backend/ggml/ggml/LICENSE creating: ollama-0.12.5/ml/backend/ggml/ggml/cmake/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/cmake/common.cmake creating: 
ollama-0.12.5/ml/backend/ggml/ggml/include/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-alloc.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-backend.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-blas.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-cann.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-cpp.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-cpu.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-cuda.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-metal.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-opencl.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-opt.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-rpc.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-sycl.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-vulkan.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-zdnn.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/gguf.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ollama-debug.h creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/CMakeLists.txt inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-alloc.c inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-backend-impl.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-backend-reg.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-backend.cpp creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-blas/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-blas/CMakeLists.txt inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-blas/blas.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-blas/ggml-blas.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-common.h creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/CMakeLists.txt creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/common.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch-fallback.h creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/ creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/arm.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/cpu-feats.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/quants.c inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/repack.cpp creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/x86.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/cpu.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/cpu_amd64.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/cpu_arm64.go extracting: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/cpu_debug.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/hbm.h creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/llamafile.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.c inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/simd-mappings.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/acc.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/acc.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/arange.cu inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/arange.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/common.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/concat.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/concat.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/convert.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/convert.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cp-async.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cpy-utils.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/dequantize.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-common.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-mma-f16.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-vec.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/gla.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/gla.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mean.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mean.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mma.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cuh inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/norm.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/norm.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-sgd.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-sgd.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pad.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pad.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pad_reflect_1d.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pad_reflect_1d.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/reduce_rows.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/roll.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/roll.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/rope.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/rope.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/scale.cu inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/scale.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/sum.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/sum.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cuh creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-f16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q4_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q4_1.cu inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q5_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q5_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q8_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-f16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q4_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q4_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q5_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q5_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q8_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-f16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q4_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q4_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q5_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q5_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q8_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-f16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q4_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q4_1.cu inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q5_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q5_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q8_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-f16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q4_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q4_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q5_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q5_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q8_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-f16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q4_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q4_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q5_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q5_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q8_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_10.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_11.cu inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_12.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_13.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_14.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_15.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_16.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_2.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_3.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_4.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_5.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_6.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_7.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_8.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_9.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq1_s.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_s.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_s.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-mxfp4.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q2_k.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q3_k.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_k.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_1.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_k.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q6_k.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q8_0.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/topk-moe.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/topk-moe.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/unary.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/unary.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cuh inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vecdotq.cuh creating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vendors/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vendors/cuda.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vendors/hip.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vendors/musa.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cu inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cuh creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-hip/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-hip/CMakeLists.txt inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/CMakeLists.txt inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-common.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-common.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-context.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-context.m inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.m inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.s inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-impl.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-ops.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-ops.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/metal.go inflating: 
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-opt.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.c inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-threading.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-threading.h inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.c inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml_darwin_arm64.go inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/gguf.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/mem_hip.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/mem_nvml.cpp inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ollama-debug.c inflating: ollama-0.12.5/ml/backend/ggml/quantization.go inflating: ollama-0.12.5/ml/backend/ggml/threads.go inflating: ollama-0.12.5/ml/backend/ggml/threads_debug.go inflating: ollama-0.12.5/ml/device.go creating: ollama-0.12.5/ml/nn/ inflating: ollama-0.12.5/ml/nn/attention.go inflating: ollama-0.12.5/ml/nn/convolution.go inflating: ollama-0.12.5/ml/nn/embedding.go creating: ollama-0.12.5/ml/nn/fast/ inflating: ollama-0.12.5/ml/nn/fast/rope.go inflating: ollama-0.12.5/ml/nn/linear.go inflating: ollama-0.12.5/ml/nn/normalization.go creating: ollama-0.12.5/ml/nn/pooling/ inflating: ollama-0.12.5/ml/nn/pooling/pooling.go inflating: ollama-0.12.5/ml/nn/pooling/pooling_test.go creating: ollama-0.12.5/ml/nn/rope/ inflating: ollama-0.12.5/ml/nn/rope/rope.go creating: ollama-0.12.5/model/ inflating: ollama-0.12.5/model/bytepairencoding.go inflating: ollama-0.12.5/model/bytepairencoding_test.go creating: ollama-0.12.5/model/imageproc/ inflating: ollama-0.12.5/model/imageproc/images.go inflating: ollama-0.12.5/model/imageproc/images_test.go creating: ollama-0.12.5/model/input/ inflating: ollama-0.12.5/model/input/input.go inflating: ollama-0.12.5/model/model.go inflating: 
ollama-0.12.5/model/model_test.go creating: ollama-0.12.5/model/models/ creating: ollama-0.12.5/model/models/bert/ inflating: ollama-0.12.5/model/models/bert/embed.go creating: ollama-0.12.5/model/models/deepseek2/ inflating: ollama-0.12.5/model/models/deepseek2/model.go creating: ollama-0.12.5/model/models/gemma2/ inflating: ollama-0.12.5/model/models/gemma2/model.go creating: ollama-0.12.5/model/models/gemma3/ inflating: ollama-0.12.5/model/models/gemma3/embed.go inflating: ollama-0.12.5/model/models/gemma3/model.go inflating: ollama-0.12.5/model/models/gemma3/model_text.go inflating: ollama-0.12.5/model/models/gemma3/model_vision.go inflating: ollama-0.12.5/model/models/gemma3/process_image.go creating: ollama-0.12.5/model/models/gemma3n/ inflating: ollama-0.12.5/model/models/gemma3n/model.go inflating: ollama-0.12.5/model/models/gemma3n/model_text.go creating: ollama-0.12.5/model/models/gptoss/ inflating: ollama-0.12.5/model/models/gptoss/model.go creating: ollama-0.12.5/model/models/llama/ inflating: ollama-0.12.5/model/models/llama/model.go creating: ollama-0.12.5/model/models/llama4/ inflating: ollama-0.12.5/model/models/llama4/model.go inflating: ollama-0.12.5/model/models/llama4/model_text.go inflating: ollama-0.12.5/model/models/llama4/model_vision.go inflating: ollama-0.12.5/model/models/llama4/process_image.go inflating: ollama-0.12.5/model/models/llama4/process_image_test.go creating: ollama-0.12.5/model/models/mistral3/ inflating: ollama-0.12.5/model/models/mistral3/imageproc.go inflating: ollama-0.12.5/model/models/mistral3/model.go inflating: ollama-0.12.5/model/models/mistral3/model_text.go inflating: ollama-0.12.5/model/models/mistral3/model_vision.go creating: ollama-0.12.5/model/models/mllama/ inflating: ollama-0.12.5/model/models/mllama/model.go inflating: ollama-0.12.5/model/models/mllama/model_text.go inflating: ollama-0.12.5/model/models/mllama/model_vision.go inflating: ollama-0.12.5/model/models/mllama/process_image.go inflating: 
ollama-0.12.5/model/models/mllama/process_image_test.go inflating: ollama-0.12.5/model/models/models.go creating: ollama-0.12.5/model/models/qwen2/ inflating: ollama-0.12.5/model/models/qwen2/model.go creating: ollama-0.12.5/model/models/qwen25vl/ inflating: ollama-0.12.5/model/models/qwen25vl/model.go inflating: ollama-0.12.5/model/models/qwen25vl/model_text.go inflating: ollama-0.12.5/model/models/qwen25vl/model_vision.go inflating: ollama-0.12.5/model/models/qwen25vl/process_image.go creating: ollama-0.12.5/model/models/qwen3/ inflating: ollama-0.12.5/model/models/qwen3/embed.go inflating: ollama-0.12.5/model/models/qwen3/model.go creating: ollama-0.12.5/model/parsers/ inflating: ollama-0.12.5/model/parsers/parsers.go inflating: ollama-0.12.5/model/parsers/qwen3coder.go inflating: ollama-0.12.5/model/parsers/qwen3coder_test.go creating: ollama-0.12.5/model/renderers/ inflating: ollama-0.12.5/model/renderers/qwen3coder.go inflating: ollama-0.12.5/model/renderers/qwen3coder_test.go inflating: ollama-0.12.5/model/renderers/renderer.go inflating: ollama-0.12.5/model/sentencepiece.go inflating: ollama-0.12.5/model/sentencepiece_test.go creating: ollama-0.12.5/model/testdata/ creating: ollama-0.12.5/model/testdata/gemma2/ inflating: ollama-0.12.5/model/testdata/gemma2/tokenizer.model creating: ollama-0.12.5/model/testdata/llama3.2/ inflating: ollama-0.12.5/model/testdata/llama3.2/encoder.json inflating: ollama-0.12.5/model/testdata/llama3.2/vocab.bpe inflating: ollama-0.12.5/model/testdata/war-and-peace.txt inflating: ollama-0.12.5/model/textprocessor.go inflating: ollama-0.12.5/model/vocabulary.go inflating: ollama-0.12.5/model/vocabulary_test.go inflating: ollama-0.12.5/model/wordpiece.go inflating: ollama-0.12.5/model/wordpiece_test.go creating: ollama-0.12.5/openai/ inflating: ollama-0.12.5/openai/openai.go inflating: ollama-0.12.5/openai/openai_test.go creating: ollama-0.12.5/parser/ inflating: ollama-0.12.5/parser/expandpath_test.go inflating: 
ollama-0.12.5/parser/parser.go inflating: ollama-0.12.5/parser/parser_test.go creating: ollama-0.12.5/progress/ inflating: ollama-0.12.5/progress/bar.go inflating: ollama-0.12.5/progress/progress.go inflating: ollama-0.12.5/progress/spinner.go creating: ollama-0.12.5/readline/ inflating: ollama-0.12.5/readline/buffer.go inflating: ollama-0.12.5/readline/errors.go inflating: ollama-0.12.5/readline/history.go inflating: ollama-0.12.5/readline/readline.go inflating: ollama-0.12.5/readline/readline_unix.go inflating: ollama-0.12.5/readline/readline_windows.go inflating: ollama-0.12.5/readline/term.go inflating: ollama-0.12.5/readline/term_bsd.go inflating: ollama-0.12.5/readline/term_linux.go inflating: ollama-0.12.5/readline/term_windows.go inflating: ollama-0.12.5/readline/types.go creating: ollama-0.12.5/runner/ inflating: ollama-0.12.5/runner/README.md creating: ollama-0.12.5/runner/common/ inflating: ollama-0.12.5/runner/common/stop.go inflating: ollama-0.12.5/runner/common/stop_test.go creating: ollama-0.12.5/runner/llamarunner/ inflating: ollama-0.12.5/runner/llamarunner/cache.go inflating: ollama-0.12.5/runner/llamarunner/cache_test.go inflating: ollama-0.12.5/runner/llamarunner/image.go inflating: ollama-0.12.5/runner/llamarunner/image_test.go inflating: ollama-0.12.5/runner/llamarunner/runner.go creating: ollama-0.12.5/runner/ollamarunner/ inflating: ollama-0.12.5/runner/ollamarunner/cache.go inflating: ollama-0.12.5/runner/ollamarunner/cache_test.go inflating: ollama-0.12.5/runner/ollamarunner/multimodal.go inflating: ollama-0.12.5/runner/ollamarunner/runner.go inflating: ollama-0.12.5/runner/runner.go creating: ollama-0.12.5/sample/ inflating: ollama-0.12.5/sample/samplers.go inflating: ollama-0.12.5/sample/samplers_benchmark_test.go inflating: ollama-0.12.5/sample/samplers_test.go inflating: ollama-0.12.5/sample/transforms.go inflating: ollama-0.12.5/sample/transforms_test.go creating: ollama-0.12.5/scripts/ inflating: ollama-0.12.5/scripts/build_darwin.sh 
inflating: ollama-0.12.5/scripts/build_docker.sh inflating: ollama-0.12.5/scripts/build_linux.sh inflating: ollama-0.12.5/scripts/build_windows.ps1 inflating: ollama-0.12.5/scripts/env.sh inflating: ollama-0.12.5/scripts/install.sh inflating: ollama-0.12.5/scripts/push_docker.sh inflating: ollama-0.12.5/scripts/tag_latest.sh creating: ollama-0.12.5/server/ inflating: ollama-0.12.5/server/auth.go inflating: ollama-0.12.5/server/create.go inflating: ollama-0.12.5/server/create_test.go inflating: ollama-0.12.5/server/download.go inflating: ollama-0.12.5/server/fixblobs.go inflating: ollama-0.12.5/server/fixblobs_test.go inflating: ollama-0.12.5/server/images.go inflating: ollama-0.12.5/server/images_test.go creating: ollama-0.12.5/server/internal/ creating: ollama-0.12.5/server/internal/cache/ creating: ollama-0.12.5/server/internal/cache/blob/ inflating: ollama-0.12.5/server/internal/cache/blob/cache.go inflating: ollama-0.12.5/server/internal/cache/blob/cache_test.go inflating: ollama-0.12.5/server/internal/cache/blob/casecheck_test.go inflating: ollama-0.12.5/server/internal/cache/blob/chunked.go inflating: ollama-0.12.5/server/internal/cache/blob/digest.go inflating: ollama-0.12.5/server/internal/cache/blob/digest_test.go creating: ollama-0.12.5/server/internal/client/ creating: ollama-0.12.5/server/internal/client/ollama/ inflating: ollama-0.12.5/server/internal/client/ollama/registry.go inflating: ollama-0.12.5/server/internal/client/ollama/registry_synctest_test.go inflating: ollama-0.12.5/server/internal/client/ollama/registry_test.go inflating: ollama-0.12.5/server/internal/client/ollama/trace.go creating: ollama-0.12.5/server/internal/internal/ creating: ollama-0.12.5/server/internal/internal/backoff/ inflating: ollama-0.12.5/server/internal/internal/backoff/backoff.go inflating: ollama-0.12.5/server/internal/internal/backoff/backoff_synctest_test.go inflating: ollama-0.12.5/server/internal/internal/backoff/backoff_test.go creating: 
ollama-0.12.5/server/internal/internal/names/ inflating: ollama-0.12.5/server/internal/internal/names/name.go inflating: ollama-0.12.5/server/internal/internal/names/name_test.go creating: ollama-0.12.5/server/internal/internal/stringsx/ inflating: ollama-0.12.5/server/internal/internal/stringsx/stringsx.go inflating: ollama-0.12.5/server/internal/internal/stringsx/stringsx_test.go creating: ollama-0.12.5/server/internal/internal/syncs/ inflating: ollama-0.12.5/server/internal/internal/syncs/line.go inflating: ollama-0.12.5/server/internal/internal/syncs/line_test.go inflating: ollama-0.12.5/server/internal/internal/syncs/syncs.go creating: ollama-0.12.5/server/internal/manifest/ inflating: ollama-0.12.5/server/internal/manifest/manifest.go creating: ollama-0.12.5/server/internal/registry/ inflating: ollama-0.12.5/server/internal/registry/server.go inflating: ollama-0.12.5/server/internal/registry/server_test.go creating: ollama-0.12.5/server/internal/registry/testdata/ creating: ollama-0.12.5/server/internal/registry/testdata/models/ creating: ollama-0.12.5/server/internal/registry/testdata/models/blobs/ inflating: ollama-0.12.5/server/internal/registry/testdata/models/blobs/sha256-a4e5e156ddec27e286f75328784d7106b60a4eb1d246e950a001a3f944fbda99 inflating: ollama-0.12.5/server/internal/registry/testdata/models/blobs/sha256-ecfb1acfca9c76444d622fcdc3840217bd502124a9d3687d438c19b3cb9c3cb1 creating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/ creating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/example.com/ creating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/example.com/library/ creating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/example.com/library/smol/ inflating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/example.com/library/smol/latest inflating: ollama-0.12.5/server/internal/registry/testdata/registry.txt creating: 
ollama-0.12.5/server/internal/testutil/ inflating: ollama-0.12.5/server/internal/testutil/testutil.go inflating: ollama-0.12.5/server/layer.go inflating: ollama-0.12.5/server/manifest.go inflating: ollama-0.12.5/server/manifest_test.go inflating: ollama-0.12.5/server/model.go inflating: ollama-0.12.5/server/modelpath.go inflating: ollama-0.12.5/server/modelpath_test.go inflating: ollama-0.12.5/server/prompt.go inflating: ollama-0.12.5/server/prompt_test.go inflating: ollama-0.12.5/server/quantization.go inflating: ollama-0.12.5/server/quantization_test.go inflating: ollama-0.12.5/server/routes.go inflating: ollama-0.12.5/server/routes_create_test.go inflating: ollama-0.12.5/server/routes_debug_test.go inflating: ollama-0.12.5/server/routes_delete_test.go inflating: ollama-0.12.5/server/routes_generate_test.go inflating: ollama-0.12.5/server/routes_harmony_streaming_test.go inflating: ollama-0.12.5/server/routes_list_test.go inflating: ollama-0.12.5/server/routes_test.go inflating: ollama-0.12.5/server/sched.go inflating: ollama-0.12.5/server/sched_test.go extracting: ollama-0.12.5/server/sparse_common.go inflating: ollama-0.12.5/server/sparse_windows.go inflating: ollama-0.12.5/server/upload.go creating: ollama-0.12.5/template/ inflating: ollama-0.12.5/template/alfred.gotmpl inflating: ollama-0.12.5/template/alfred.json inflating: ollama-0.12.5/template/alpaca.gotmpl inflating: ollama-0.12.5/template/alpaca.json inflating: ollama-0.12.5/template/chatml.gotmpl inflating: ollama-0.12.5/template/chatml.json inflating: ollama-0.12.5/template/chatqa.gotmpl inflating: ollama-0.12.5/template/chatqa.json inflating: ollama-0.12.5/template/codellama-70b-instruct.gotmpl inflating: ollama-0.12.5/template/codellama-70b-instruct.json inflating: ollama-0.12.5/template/command-r.gotmpl inflating: ollama-0.12.5/template/command-r.json inflating: ollama-0.12.5/template/falcon-instruct.gotmpl inflating: ollama-0.12.5/template/falcon-instruct.json inflating: 
ollama-0.12.5/template/gemma-instruct.gotmpl inflating: ollama-0.12.5/template/gemma-instruct.json inflating: ollama-0.12.5/template/gemma3-instruct.gotmpl inflating: ollama-0.12.5/template/gemma3-instruct.json inflating: ollama-0.12.5/template/granite-instruct.gotmpl inflating: ollama-0.12.5/template/granite-instruct.json inflating: ollama-0.12.5/template/index.json inflating: ollama-0.12.5/template/llama2-chat.gotmpl inflating: ollama-0.12.5/template/llama2-chat.json inflating: ollama-0.12.5/template/llama3-instruct.gotmpl inflating: ollama-0.12.5/template/llama3-instruct.json inflating: ollama-0.12.5/template/magicoder.gotmpl inflating: ollama-0.12.5/template/magicoder.json inflating: ollama-0.12.5/template/mistral-instruct.gotmpl inflating: ollama-0.12.5/template/mistral-instruct.json inflating: ollama-0.12.5/template/openchat.gotmpl inflating: ollama-0.12.5/template/openchat.json inflating: ollama-0.12.5/template/phi-3.gotmpl inflating: ollama-0.12.5/template/phi-3.json inflating: ollama-0.12.5/template/solar-instruct.gotmpl inflating: ollama-0.12.5/template/solar-instruct.json inflating: ollama-0.12.5/template/starcoder2-instruct.gotmpl inflating: ollama-0.12.5/template/starcoder2-instruct.json inflating: ollama-0.12.5/template/template.go inflating: ollama-0.12.5/template/template_test.go creating: ollama-0.12.5/template/testdata/ creating: ollama-0.12.5/template/testdata/alfred.gotmpl/ inflating: ollama-0.12.5/template/testdata/alfred.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/alfred.gotmpl/user inflating: ollama-0.12.5/template/testdata/alfred.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/alpaca.gotmpl/ inflating: ollama-0.12.5/template/testdata/alpaca.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/alpaca.gotmpl/user inflating: ollama-0.12.5/template/testdata/alpaca.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/chatml.gotmpl/ inflating: 
ollama-0.12.5/template/testdata/chatml.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/chatml.gotmpl/user inflating: ollama-0.12.5/template/testdata/chatml.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/chatqa.gotmpl/ inflating: ollama-0.12.5/template/testdata/chatqa.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/chatqa.gotmpl/user inflating: ollama-0.12.5/template/testdata/chatqa.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/codellama-70b-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/codellama-70b-instruct.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/codellama-70b-instruct.gotmpl/user inflating: ollama-0.12.5/template/testdata/codellama-70b-instruct.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/command-r.gotmpl/ inflating: ollama-0.12.5/template/testdata/command-r.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/command-r.gotmpl/user inflating: ollama-0.12.5/template/testdata/command-r.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/falcon-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/falcon-instruct.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/falcon-instruct.gotmpl/user inflating: ollama-0.12.5/template/testdata/falcon-instruct.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/gemma-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/gemma-instruct.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/gemma-instruct.gotmpl/user inflating: ollama-0.12.5/template/testdata/gemma-instruct.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/gemma3-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/gemma3-instruct.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/gemma3-instruct.gotmpl/user inflating: 
ollama-0.12.5/template/testdata/gemma3-instruct.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/granite-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/granite-instruct.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/granite-instruct.gotmpl/user inflating: ollama-0.12.5/template/testdata/granite-instruct.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/llama2-chat.gotmpl/ inflating: ollama-0.12.5/template/testdata/llama2-chat.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/llama2-chat.gotmpl/user inflating: ollama-0.12.5/template/testdata/llama2-chat.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/llama3-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/llama3-instruct.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/llama3-instruct.gotmpl/user inflating: ollama-0.12.5/template/testdata/llama3-instruct.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/magicoder.gotmpl/ inflating: ollama-0.12.5/template/testdata/magicoder.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/magicoder.gotmpl/user inflating: ollama-0.12.5/template/testdata/magicoder.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/mistral-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/mistral-instruct.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/mistral-instruct.gotmpl/user inflating: ollama-0.12.5/template/testdata/mistral-instruct.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/openchat.gotmpl/ inflating: ollama-0.12.5/template/testdata/openchat.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/openchat.gotmpl/user inflating: ollama-0.12.5/template/testdata/openchat.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/phi-3.gotmpl/ inflating: 
ollama-0.12.5/template/testdata/phi-3.gotmpl/system-user-assistant-user inflating: ollama-0.12.5/template/testdata/phi-3.gotmpl/user inflating: ollama-0.12.5/template/testdata/phi-3.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/solar-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/solar-instruct.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/solar-instruct.gotmpl/user inflating: ollama-0.12.5/template/testdata/solar-instruct.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/starcoder2-instruct.gotmpl/ inflating: ollama-0.12.5/template/testdata/starcoder2-instruct.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/starcoder2-instruct.gotmpl/user inflating: ollama-0.12.5/template/testdata/starcoder2-instruct.gotmpl/user-assistant-user inflating: ollama-0.12.5/template/testdata/templates.jsonl creating: ollama-0.12.5/template/testdata/vicuna.gotmpl/ inflating: ollama-0.12.5/template/testdata/vicuna.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/vicuna.gotmpl/user inflating: ollama-0.12.5/template/testdata/vicuna.gotmpl/user-assistant-user creating: ollama-0.12.5/template/testdata/zephyr.gotmpl/ inflating: ollama-0.12.5/template/testdata/zephyr.gotmpl/system-user-assistant-user extracting: ollama-0.12.5/template/testdata/zephyr.gotmpl/user inflating: ollama-0.12.5/template/testdata/zephyr.gotmpl/user-assistant-user inflating: ollama-0.12.5/template/vicuna.gotmpl inflating: ollama-0.12.5/template/vicuna.json inflating: ollama-0.12.5/template/zephyr.gotmpl inflating: ollama-0.12.5/template/zephyr.json creating: ollama-0.12.5/thinking/ inflating: ollama-0.12.5/thinking/parser.go inflating: ollama-0.12.5/thinking/parser_test.go inflating: ollama-0.12.5/thinking/template.go inflating: ollama-0.12.5/thinking/template_test.go creating: ollama-0.12.5/tools/ inflating: ollama-0.12.5/tools/template.go inflating: 
ollama-0.12.5/tools/template_test.go inflating: ollama-0.12.5/tools/tools.go inflating: ollama-0.12.5/tools/tools_test.go creating: ollama-0.12.5/types/ creating: ollama-0.12.5/types/errtypes/ inflating: ollama-0.12.5/types/errtypes/errtypes.go creating: ollama-0.12.5/types/model/ inflating: ollama-0.12.5/types/model/capability.go inflating: ollama-0.12.5/types/model/name.go inflating: ollama-0.12.5/types/model/name_test.go creating: ollama-0.12.5/types/model/testdata/ creating: ollama-0.12.5/types/model/testdata/fuzz/ creating: ollama-0.12.5/types/model/testdata/fuzz/FuzzName/ extracting: ollama-0.12.5/types/model/testdata/fuzz/FuzzName/d37463aa416f6bab creating: ollama-0.12.5/types/syncmap/ inflating: ollama-0.12.5/types/syncmap/syncmap.go creating: ollama-0.12.5/version/ inflating: ollama-0.12.5/version/version.go + STATUS=0 + '[' 0 -ne 0 ']' + cd ollama-0.12.5 + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + cd /builddir/build/BUILD/ollama-0.12.5-build + cd ollama-0.12.5 + /usr/lib/rpm/rpmuncompress -x -v /builddir/build/SOURCES/main.zip TZ=UTC /usr/bin/unzip -u '/builddir/build/SOURCES/main.zip' Archive: /builddir/build/SOURCES/main.zip 8b2d93346864502581d28f0b769fba147b7a8cf5 creating: ollamad-main/ inflating: ollamad-main/README.md inflating: ollamad-main/ollamad.conf inflating: ollamad-main/ollamad.service creating: ollamad-main/packaging/ inflating: ollamad-main/packaging/ollama.spec + STATUS=0 + '[' 0 -ne 0 ']' + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . 
+ RPM_EC=0 ++ jobs -p + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.eqMXMQ + umask 022 + cd /builddir/build/BUILD/ollama-0.12.5-build + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables 
-fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + cd ollama-0.12.5 + export PATH=/usr/local/cuda/bin:/usr/lib64/ccache:/usr/bin:/bin:/usr/sbin:/sbin + PATH=/usr/local/cuda/bin:/usr/lib64/ccache:/usr/bin:/bin:/usr/sbin:/sbin + export LD_LIBRARY_PATH=:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64 + LD_LIBRARY_PATH=:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64 + export CUDACXX=/usr/local/cuda/bin/nvcc + CUDACXX=/usr/local/cuda/bin/nvcc + /usr/local/cuda/bin/nvcc --help Usage : nvcc [options] Options for specifying the compilation phase ============================================ More exactly, this option specifies up to which stage the input files must be compiled, according to the following compilation trajectories for different input file types: .c/.cc/.cpp/.cxx : preprocess, compile, link .o : link .i/.ii : compile, link .cu : preprocess, cuda frontend, PTX assemble, merge with host C code, compile, link .gpu : cicc compile into cubin .ptx : PTX assemble into cubin. --cuda (-cuda) Compile all .cu input files to .cu.cpp.ii output. --cubin (-cubin) Compile all .cu/.gpu/.ptx input files to device-only .cubin files. This step discards the host code for each .cu input file. 
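The phase options above (continuing below) stop the .cu compilation trajectory at a chosen stage. A minimal sketch of driving those stages by hand — assuming the toolchain this build just set up (`/usr/local/cuda/bin` on PATH) and a hypothetical `kernel.cu`; file names are illustrative:

```
# Stop the .cu trajectory at successive stages:
nvcc -E kernel.cu > kernel.cu.ii      # preprocess only
nvcc -ptx kernel.cu -o kernel.ptx     # device-only PTX; host code is discarded
nvcc -cubin kernel.cu -o kernel.cubin # PTX assembled into a device-only cubin
nvcc -c kernel.cu -o kernel.o         # full compile to a linkable object
nvcc kernel.o -o kernel               # default behavior: link
```

Each device-only step (-ptx, -cubin, -fatbin) drops the host side of the .cu file, exactly as the option descriptions note.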
--fatbin (-fatbin) Compile all .cu/.gpu/.ptx/.cubin input files to device-only .fatbin files. This step discards the host code for each .cu input file. --ptx (-ptx) Compile all .cu input files to device-only .ptx files. This step discards the host code for each of these input files. --optix-ir (-optix-ir) Compile CUDA source to OptiX IR (.optixir) output. The OptiX IR is only intended for consumption by OptiX through appropriate APIs. This feature is not supported with link-time-optimization (-dlto), the lto_NN -arch target, or with -gencode. --ltoir (-ltoir) Compile CUDA source to device-only .ltoir file. This feature is only supported with link-time-optimization (-dlto). This step discards the host code for each .cu input file. --preprocess (-E) Preprocess all .c/.cc/.cpp/.cxx/.cu input files. --generate-dependencies (-M) Generate a dependency file that can be included in a make file for the .c/.cc/.cpp/.cxx/.cu input file. If -MF is specified, multiple source files are not supported, and the output is written to the specified file, otherwise it is written to stdout. --generate-nonsystem-dependencies (-MM) Same as --generate-dependencies but skip header files found in system directories (Linux only). --generate-dependencies-with-compile (-MD) Generate a dependency file and compile the input file. The dependency file can be included in a make file for the .c/.cc/.cpp/.cxx/.cu input file. This option cannot be specified together with -E. The dependency file name is computed as follows: - If -MF is specified, then the specified file is used as the dependency file name - If -o is specified, the dependency file name is computed from the specified file name by replacing the suffix with '.d'. - Otherwise, the dependency file name is computed by replacing the input file name's suffix with '.d' If the dependency file name is computed based on either -MF or -o, then multiple input files are not supported. 
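The dependency-generation options just described (-M, -MD, -MF) can be combined as follows; a sketch with a hypothetical `kernel.cu`, assuming nvcc from this build's PATH:

```
nvcc -M kernel.cu                                # print a make-style dependency rule to stdout
nvcc -MD -MF kernel.d -c kernel.cu -o kernel.o   # compile and write the rule to kernel.d
```

Per the rules above, without -MF the -MD output name would instead be derived from -o by swapping the suffix for '.d' (kernel.d here either way).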
--generate-nonsystem-dependencies-with-compile (-MMD) Same as --generate-dependencies-with-compile, but skip header files found in system directories (Linux only). --dependency-output <file> (-MF) Specify the output file for the dependency file generated with -M/-MM/-MD/-MMD. --generate-dependency-targets (-MP) Add an empty target for each dependency. --compile (-c) Compile each .c/.cc/.cpp/.cxx/.cu input file into an object file. --device-c (-dc) Compile each .c/.cc/.cpp/.cxx/.cu input file into an object file that contains relocatable device code. It is equivalent to '--relocatable-device-code=true --compile'. --device-w (-dw) Compile each .c/.cc/.cpp/.cxx/.cu input file into an object file that contains executable device code. It is equivalent to '--relocatable-device-code=false --compile'. --device-link (-dlink) Link object files with relocatable device code and .ptx/.cubin/.fatbin files into an object file with executable device code, which can be passed to the host linker. --link (-link) This option specifies the default behavior: compile and link all inputs. --lib (-lib) Compile all inputs into object files (if necessary) and add the results to the specified output library file. --run (-run) This option compiles and links all inputs into an executable, and executes it. Or, when the input is a single executable, it is executed without any compilation or linking. This step is intended for developers who do not want to be bothered with setting the necessary environment variables; these are set temporarily by nvcc. File and path specifications. ============================= --output-file <file> (-o) Specify name and location of the output file. Only a single input file is allowed when this option is present in nvcc non-linking/archiving mode. --pre-include <file>,... (-include) Specify header files that must be preincluded during preprocessing. --objdir-as-tempdir (-objtemp) Create intermediate files in the same directory as the object file instead of in the temporary directory. 
This option will take effect only if -c, -dc or -dw is also used. --library <library>,... (-l) Specify libraries to be used in the linking stage without the library file extension. The libraries are searched for on the library search paths that have been specified using option '--library-path'. --define-macro <def>,... (-D) Specify macro definitions to define for use during preprocessing or compilation. --undefine-macro <def>,... (-U) Undefine macro definitions during preprocessing or compilation. --include-path <path>,... (-I) Specify include search paths. --system-include <path>,... (-isystem) Specify system include search paths. --library-path <path>,... (-L) Specify library search paths. --output-directory <directory> (-odir) Specify the directory of the output file. This option is intended for letting the dependency generation step (see '--generate-dependencies') generate a rule that defines the target object file in the proper directory. --compiler-bindir <path> (-ccbin) Specify the directory in which the host compiler executable resides. The host compiler executable name can be also specified to ensure that the correct host compiler is selected. In addition, driver prefix options ('--input-drive-prefix', '--dependency-drive-prefix', or '--drive-prefix') may need to be specified, if nvcc is executed in a Cygwin shell or a MinGW shell on Windows. --allow-unsupported-compiler (-allow-unsupported-compiler) Disable nvcc check for supported host compiler versions. Using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk. This option has no effect on MacOS. --archiver-binary <path> (-arbin) Specify the path of the executable for the archiving tool used to create static libraries with '--lib'. If unspecified, a platform-specific default is used. --cudart {none|shared|static} (-cudart) Specify the type of CUDA runtime library to be used: no CUDA runtime library, shared/dynamic CUDA runtime library, or static CUDA runtime library. 
Allowed values for this option: 'none','shared','static'. Default value: 'static'. --cudadevrt {none|static} (-cudadevrt) Specify the type of CUDA device runtime library to be used: no CUDA device runtime library, or static CUDA device runtime library. Allowed values for this option: 'none','static'. Default value: 'static'. --libdevice-directory <directory> (-ldir) Specify the directory that contains the libdevice library files when option '--dont-use-profile' is used. Libdevice library files are located in the 'nvvm/libdevice' directory in the CUDA toolkit. --target-directory <string> (-target-dir) Specify the subfolder name in the targets directory where the default include and library paths are located. --use-local-env (-use-local-env) Use this flag to force nvcc to assume that the environment for cl.exe has already been set up, and skip running the batch file from the MSVC installation that sets up the environment for cl.exe. This can significantly reduce overall compile time for small programs. --force-cl-env-setup (-force-cl-env-setup) Force nvcc to always run the batch file from the MSVC installation to set up the environment for cl.exe (matching legacy nvcc behavior). If this flag is not specified, by default, nvcc will skip running the batch file if the following conditions are satisfied: cl.exe is in the PATH, environment variable VSCMD_VER is set, and, if -ccbin is specified, then the compiler denoted by -ccbin matches the cl.exe in the PATH. Skipping the batch file execution can reduce overall compile time significantly for small programs. Options for specifying behavior of compiler/linker. =================================================== --profile (-pg) Instrument generated code/executable for use by gprof (Linux only). --debug (-g) Generate debug information for host code. --device-debug (-G) Generate debug information for device code. If --dopt is not specified, then turns off all optimizations. Don't use for profiling; use -lineinfo instead. 
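The --device-c/--device-link pair described above is how relocatable device code gets compiled separately and then device-linked before the host link. A hedged sketch — `a.cu`, `b.cu`, and `sm_80` are placeholders, and the host-link line assumes the shared CUDA runtime (the nvcc default, per --cudart above, is 'static'):

```
nvcc -dc -arch=sm_80 a.cu b.cu                 # objects containing relocatable device code
nvcc -dlink -arch=sm_80 a.o b.o -o dev.o       # device-link step; dev.o holds executable device code
g++ a.o b.o dev.o -L/usr/local/cuda/lib64 \
    -lcudart -o app                            # ordinary host link against the shared runtime
```

Letting nvcc itself do the final link (`nvcc a.o b.o -o app` after -dc) folds the device-link step in implicitly and uses the static runtime by default.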
--generate-line-info (-lineinfo)
    Generate line-number information for device code.

--optimization-info <kind>,... (-opt-info)
    Provide optimization reports for the specified kind of optimization.
    The following tags are supported:
        inline: Emit remarks related to function inlining. The inlining
        pass may be invoked multiple times by the compiler, and a function
        not inlined in an earlier pass may be inlined in a subsequent
        pass.
    Allowed values for this option: 'inline'.

--optimize <level> (-O)
    Specify optimization level for host code.

--Ofast-compile <level> (-Ofc)
    Specify fast-compile level for device code, which controls the
    tradeoff between compilation speed and runtime performance by
    disabling certain optimizations at varying levels. Levels:
        'max': Focus only on the fastest compilation speed, disabling many
        optimizations.
        'mid': Balance compile time and runtime, disabling expensive
        optimizations.
        'min': More minimal impact on both compile time and runtime,
        minimizing some expensive optimizations.
        '0': Disables fast-compile.
    The option is disabled by default.
    Allowed values for this option: '0','max','mid','min'.

--dopt <kind> (-dopt)
    Enable device code optimization. When specified along with '-G',
    enables limited debug information generation for optimized device code
    (currently, only line number information). When '-G' is not specified,
    '-dopt=on' is implicit.
    Allowed values for this option: 'on'.

--dlink-time-opt (-dlto)
    Perform link-time optimization of device code. Link-time optimization
    must be specified at both compile and link time; at compile time it
    stores high-level intermediate code, then at link time it links
    together and optimizes the intermediate code. If that intermediate is
    not found at link time then nothing happens. Intermediate code is also
    stored at compile time with the --gpu-code='lto_NN' target. The
    options -dlto -arch=sm_NN will add a lto_NN target; if you want to
    only add a lto_NN target and not the compute_NN that -arch=sm_NN
    usually generates, use -arch=lto_NN.
    The options '-dlto -dlink -ptx -o <file>' will cause nvlink to
    generate <file>. If -o is not used, the file generated will be
    a_dlink.dlto.ptx.

--lto (-lto)
    Alias for -dlto.

--gen-opt-lto (-gen-opt-lto)
    Run the optimizer passes before generating the LTO IR.

--ftemplate-backtrace-limit <limit> (-ftemplate-backtrace-limit)
    Set the maximum number of template instantiation notes for a single
    warning or error to <limit>. A value of 0 is allowed, and indicates
    that no limit should be enforced. This value is also passed to the
    host compiler if it provides an equivalent flag.

--ftemplate-depth <limit> (-ftemplate-depth)
    Set the maximum instantiation depth for template classes to <limit>.
    This value is also passed to the host compiler if it provides an
    equivalent flag.

--no-exceptions (-noeh)
    Disable exception handling for host code.

--shared (-shared)
    Generate a shared library during linking. Use option
    '--linker-options' when other linker options are required for more
    control.

--x {c|c++|cu} (-x)
    Explicitly specify the language for the input files, rather than
    letting the compiler choose a default based on the file name suffix.
    Allowed values for this option: 'c','c++','cu'.

--std {c++03|c++11|c++14|c++17|c++20} (-std)
    Select a particular C++ dialect. Note that this flag also turns on the
    corresponding dialect flag for the host compiler.
    Allowed values for this option: 'c++03','c++11','c++14','c++17','c++20'.

--no-host-device-initializer-list (-nohdinitlist)
    Do not implicitly consider member functions of std::initializer_list
    as __host__ __device__ functions.

--no-host-device-move-forward (-nohdmoveforward)
    Do not implicitly consider std::move and std::forward as __host__
    __device__ function templates.

--expt-relaxed-constexpr (-expt-relaxed-constexpr)
    Experimental flag: Allow host code to invoke __device__ constexpr
    functions, and device code to invoke __host__ constexpr functions.
    Note that the behavior of this flag may change in future compiler
    releases.
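The -dlto description above says link-time optimization must be requested at both compile and link time. A hypothetical sketch of that two-step flow (file names and the sm_90 target are illustrative assumptions; requires an installed nvcc and a matching GPU toolchain):

```shell
# Step 1: separate compilation (-dc) storing LTO intermediate code
# in each object instead of final SASS.
nvcc -dlto -arch=sm_90 -dc a.cu b.cu

# Step 2: device link with -dlto again, so nvlink can optimize
# across a.o and b.o before producing the final executable.
nvcc -dlto -arch=sm_90 a.o b.o -o app
```

If -dlto is omitted from either step, no cross-object optimization happens; with it omitted at link time the stored intermediate is simply ignored.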
--extended-lambda (-extended-lambda)
    Allow __host__, __device__ annotations in lambda declaration.

--expt-extended-lambda (-expt-extended-lambda)
    Alias for -extended-lambda.

--machine {64} (-m)
    Specify 64 bit architecture.
    Allowed values for this option: 64.
    Default value: 64.

--m64 (-m64)
    Equivalent to --machine=64.

Options for passing specific phase options
==========================================
These allow for passing options directly to the intended compilation
phase. Using these, users have the ability to pass options to the lower
level compilation tools, without the need for nvcc to know about each and
every such option.

--compiler-options <options>,... (-Xcompiler)
    Specify options directly to the compiler/preprocessor.

--linker-options <options>,... (-Xlinker)
    Specify options directly to the host linker.

--archive-options <options>,... (-Xarchive)
    Specify options directly to the library manager.

--ptxas-options <options>,... (-Xptxas)
    Specify options directly to ptxas, the PTX optimizing assembler.

--nvlink-options <options>,... (-Xnvlink)
    Specify options directly to nvlink.

Miscellaneous options for guiding the compiler driver.
======================================================

--static-global-template-stub {true|false} (-static-global-template-stub)
    In whole-program compilation mode (-rdc=false), force 'static' linkage
    for host side stub functions generated for __global__ function
    templates. When this flag is false, a __global__ template stub
    function in the C++ code sent to the host compiler will have weak
    linkage. This can cause a correctness problem if template
    instantiations of the __global__ template are present in multiple
    translation units, since the host linker will choose only one of the
    stub instantiations in the linked host program, but the device code in
    each translation unit forms a separate device program with a distinct
    version of the __global__ template instantiation.
    Thus, launching a __global__ template instantiation from one
    translation unit may unexpectedly launch the kernel in a device
    program from a different translation unit that contains the same
    __global__ function template instantiation. Please see the nvcc
    documentation for details. Setting this flag to true avoids the
    problem, since the CUDA compiler will force 'static' linkage for the
    __global__ template stub function, and so the __global__ template stub
    will be unique across multiple translation units.
    Note:
    - This option is ignored unless the program is being compiled in
      whole program compilation mode (-rdc=false).
    - Turning on this flag may break existing code in some corner cases
      (only in whole program compilation mode):
      a. If a __global__ function template is declared as a friend, and
         the friend declaration is the first declaration of the entity.
      b. If a __global__ function template is referenced, but not defined
         in the current translation unit.
    Default value: true.

--device-entity-has-hidden-visibility {true|false} (-device-entity-has-hidden-visibility)
    This flag applies to __global__ functions and function templates, and
    to __constant__, __device__ and __managed__ variables and variable
    templates, when using host compilers that support the 'visibility'
    attribute (e.g. gcc, clang). When this flag is enabled, the CUDA
    frontend compiler will implicitly add
    '__attribute__((visibility("hidden")))' to every declaration of these
    entities, unless the entity has internal linkage or the entity has
    non-default visibility, e.g., due to
    '__attribute__((visibility("default")))' on an enclosing namespace.
    If building a shared library, entities with 'hidden' visibility cannot
    be referenced from outside the shared library. This behavior is
    desired for __global__ functions/template instantiations and for
    __constant__/__device__/__managed__ variables and template
    instantiations, because the functionality of these entities depends on
    the CUDA Runtime (CUDART) library.
    If such entities are referenced from outside the shared library, then
    subtle errors can occur if a different CUDART is linked in to the
    shared library versus the user of the shared library. By forcing
    'hidden' visibility for such entities, these problems are avoided (the
    program will fail to link). Please also see related flag
    '-static-global-template-stub', which forces internal linkage for
    __global__ templates in whole program compilation mode.
    Default value: true.

--forward-unknown-to-host-compiler (-forward-unknown-to-host-compiler)
    Forward unknown options to the host compiler. An 'unknown option' is a
    command line argument that starts with '-' followed by another
    character, and is not a recognized nvcc flag or an argument for a
    recognized nvcc flag.
    Note: If the unknown option is followed by a separate command line
    argument, the argument will not be forwarded, unless it begins with
    the '-' character. E.g.
    'nvcc -forward-unknown-to-host-compiler -foo=bar a.cu' will forward
    '-foo=bar' to the host compiler.
    'nvcc -forward-unknown-to-host-compiler -foo bar a.cu' will report an
    error for the 'bar' argument.
    'nvcc -forward-unknown-to-host-compiler -foo -bar a.cu' will forward
    '-foo' and '-bar' to the host compiler.
    Note: On Windows, also see option '-forward-slash-prefix-opts' for
    forwarding options that begin with '/'.

--forward-unknown-to-host-linker (-forward-unknown-to-host-linker)
    Forward unknown options to the host linker. An 'unknown option' is a
    command line argument that starts with '-' followed by another
    character, and is not a recognized nvcc flag or an argument for a
    recognized nvcc flag.
    Note: If the unknown option is followed by a separate command line
    argument, the argument will not be forwarded, unless it begins with
    the '-' character. E.g.
    'nvcc -forward-unknown-to-host-linker -foo=bar a.cu' will forward
    '-foo=bar' to the host linker.
    'nvcc -forward-unknown-to-host-linker -foo bar a.cu' will report an
    error for the 'bar' argument.
    'nvcc -forward-unknown-to-host-linker -foo -bar a.cu' will forward
    '-foo' and '-bar' to the host linker.
    Note: On Windows, also see option '-forward-slash-prefix-opts' for
    forwarding options that begin with '/'.

--forward-unknown-opts (-forward-unknown-opts)
    Implies the combination of options: -forward-unknown-to-host-linker
    and -forward-unknown-to-host-compiler. E.g.
    'nvcc -forward-unknown-opts -foo=bar a.cu' will forward '-foo=bar' to
    the host linker and compiler.
    'nvcc -forward-unknown-opts -foo bar a.cu' will report an error for
    the 'bar' argument.
    'nvcc -forward-unknown-opts -foo -bar a.cu' will forward '-foo' and
    '-bar' to the host linker and compiler.
    Note: On Windows, also see option '-forward-slash-prefix-opts' for
    forwarding options that begin with '/'.

--dont-use-profile (-noprof)
    Nvcc uses the nvcc.profiles file for compilation. When specifying this
    option, the profile file is not used.

--dryrun (-dryrun)
    Do not execute the compilation commands generated by nvcc. Instead,
    list them.

--verbose (-v)
    List the compilation commands generated by this compiler driver, but
    do not suppress their execution.

--threads <number> (-t)
    Specify the maximum number of threads to be created in parallel when
    compiling for multiple architectures. If <number> is 1 or if compiling
    for one architecture, this option is ignored. If <number> is 0, the
    number of threads will be the number of CPUs on the machine.

--split-compile <number> (-split-compile)
    Specify the maximum number of concurrent threads to be utilized when
    running compiler optimizations. If <number> is 1, this option is
    ignored. If <number> is 0, the number of threads will be the number of
    CPUs on the machine. This option will have minimal (if any) impact on
    performance of the compiled binary.

--split-compile-extended <number> (-split-compile-extended)
    Specify the maximum number of concurrent threads to be utilized when
    running compiler optimizations in LTO mode. If <number> is 1, this
    option is ignored. If <number> is 0, the number of threads will be the
    number of CPUs on the machine.
    This option is a more aggressive form of split compilation, and can
    potentially impact performance of the compiled binary. It is available
    in LTO mode only.

--fdevice-syntax-only (-fdevice-syntax-only)
    Ends device compilation after front-end syntax checking. This option
    does not generate valid device code.

--fdevice-time-trace <filename> (-fdevice-time-trace)
    Enables the time profiler, outputting a JSON file based on the given
    <filename>. If <filename> is '-', the JSON file will have the same
    name as the user provided output file (-o), otherwise it will be set
    to 'trace.json'. Results can be analyzed on chrome://tracing for a
    flamegraph visualization.

--keep (-keep)
    Keep all intermediate files that are generated during internal
    compilation steps.

--keep-dir <directory> (-keep-dir)
    Keep all intermediate files that are generated during internal
    compilation steps in this directory.

--save-temps (-save-temps)
    This option is an alias of '--keep'.

--clean-targets (-clean)
    This option reverses the behavior of nvcc. When specified, none of the
    compilation phases will be executed. Instead, all of the non-temporary
    files that nvcc would otherwise create will be deleted.

--time <filename> (-time)
    Generate a comma separated value table with the time taken by each
    compilation phase, and append it at the end of the file given as the
    option argument. If the file is empty, the column headings are
    generated in the first row of the table. If the file name is '-', the
    timing data is generated in stdout.

--run-args <arguments>,... (-run-args)
    Used in combination with option --run to specify command line
    arguments for the executable.

--input-drive-prefix <prefix> (-idp)
    On Windows, all command line arguments that refer to file names must
    be converted to the Windows native format before they are passed to
    pure Windows executables. This option specifies how the current
    development environment represents absolute paths. Use '/cygwin/' as
    <prefix> for Cygwin build environments, and '/' as <prefix> for MinGW.
--dependency-drive-prefix <prefix> (-ddp)
    On Windows, when generating dependency files (see
    --generate-dependencies), all file names must be converted
    appropriately for the instance of 'make' that is used. Some instances
    of 'make' have trouble with the colon in absolute paths in the native
    Windows format, which depends on the environment in which the 'make'
    instance has been compiled. Use '/cygwin/' as <prefix> for a Cygwin
    make, and '/' as <prefix> for MinGW. Or leave these file names in the
    native Windows format by specifying nothing.

--drive-prefix <prefix> (-dp)
    Specifies <prefix> as both --input-drive-prefix and
    --dependency-drive-prefix.

--dependency-target-name <target> (-MT)
    Specify the target name of the generated rule when generating a
    dependency file (see '--generate-dependencies').

--no-align-double (-no-align-double)
    Specifies that '-malign-double' should not be passed as a compiler
    argument on 32-bit platforms. WARNING: this makes the ABI incompatible
    with the CUDA kernel ABI for certain 64-bit types.

--no-device-link (-nodlink)
    Skip the device link step when linking object files.

Options for steering GPU code generation.
=========================================

--gpu-architecture <arch> (-arch)
    Specify the name of the class of NVIDIA 'virtual' GPU architecture for
    which the CUDA input files must be compiled. With the exception as
    described for the shorthand below, the architecture specified with
    this option must be a 'virtual' architecture (such as compute_100).
    Normally, this option alone does not trigger assembly of the generated
    PTX for a 'real' architecture (that is the role of nvcc option
    '--gpu-code', see below); rather, its purpose is to control
    preprocessing and compilation of the input to PTX. For convenience, in
    case of simple nvcc compilations, the following shorthand is
    supported. If no value for option '--gpu-code' is specified, then the
    value of this option defaults to the value of '--gpu-architecture'.
    In this situation, as only exception to the description above, the
    value specified for '--gpu-architecture' may be a 'real' architecture
    (such as sm_100), in which case nvcc uses the specified 'real'
    architecture and its closest 'virtual' architecture as effective
    architecture values. For example, 'nvcc --gpu-architecture=sm_100' is
    equivalent to
    'nvcc --gpu-architecture=compute_100 --gpu-code=sm_100,compute_100'.
    -arch=all        build for all supported architectures (sm_*), and
                     add PTX for the highest major architecture to the
                     generated code.
    -arch=all-major  build for just supported major versions (sm_*0),
                     plus the earliest supported, and add PTX for the
                     highest major architecture to the generated code.
    -arch=native     build for all architectures (sm_*) on the current
                     system.
    Note: -arch=native, -arch=all, -arch=all-major cannot be used with the
    -code option, but can be used with -gencode options.
    Allowed values for this option:
    'all','all-major','compute_100','compute_100a','compute_100f',
    'compute_103','compute_103a','compute_103f','compute_110',
    'compute_110a','compute_110f','compute_120','compute_120a',
    'compute_120f','compute_121','compute_121a','compute_121f',
    'compute_75','compute_80','compute_86','compute_87','compute_88',
    'compute_89','compute_90','compute_90a','lto_100','lto_100a',
    'lto_100f','lto_103','lto_103a','lto_103f','lto_110','lto_110a',
    'lto_110f','lto_120','lto_120a','lto_120f','lto_121','lto_121a',
    'lto_121f','lto_75','lto_80','lto_86','lto_87','lto_88','lto_89',
    'lto_90','lto_90a','native','sm_100','sm_100a','sm_100f','sm_103',
    'sm_103a','sm_103f','sm_110','sm_110a','sm_110f','sm_120','sm_120a',
    'sm_120f','sm_121','sm_121a','sm_121f','sm_75','sm_80','sm_86',
    'sm_87','sm_88','sm_89','sm_90','sm_90a'.

--gpu-code <code>,... (-code)
    Specify the name of the NVIDIA GPU to assemble and optimize PTX for.
    nvcc embeds a compiled code image in the resulting executable for each
    specified <code> architecture, which is a true binary load image for
    each 'real' architecture (such as sm_100), and PTX code for the
    'virtual' architecture (such as compute_100). During runtime, such
    embedded PTX code is dynamically compiled by the CUDA runtime system
    if no binary load image is found for the 'current' GPU. Architectures
    specified for options '--gpu-architecture' and '--gpu-code' may be
    'virtual' as well as 'real', but the <code> architectures must be
    compatible with the <arch> architecture. When the '--gpu-code' option
    is used, the value for the '--gpu-architecture' option must be a
    'virtual' PTX architecture. For instance,
    '--gpu-architecture=compute_100' is not compatible with
    '--gpu-code=sm_90', because the earlier compilation stages will assume
    the availability of 'compute_100' features that are not present on
    'sm_90'.
    Allowed values for this option:
    'compute_100','compute_100a','compute_100f','compute_103',
    'compute_103a','compute_103f','compute_110','compute_110a',
    'compute_110f','compute_120','compute_120a','compute_120f',
    'compute_121','compute_121a','compute_121f','compute_75',
    'compute_80','compute_86','compute_87','compute_88','compute_89',
    'compute_90','compute_90a','lto_100','lto_100a','lto_100f','lto_103',
    'lto_103a','lto_103f','lto_110','lto_110a','lto_110f','lto_120',
    'lto_120a','lto_120f','lto_121','lto_121a','lto_121f','lto_75',
    'lto_80','lto_86','lto_87','lto_88','lto_89','lto_90','lto_90a',
    'sm_100','sm_100a','sm_100f','sm_103','sm_103a','sm_103f','sm_110',
    'sm_110a','sm_110f','sm_120','sm_120a','sm_120f','sm_121','sm_121a',
    'sm_121f','sm_75','sm_80','sm_86','sm_87','sm_88','sm_89','sm_90',
    'sm_90a'.

--generate-code <specification>,... (-gencode)
    This option provides a generalization of the
    '--gpu-architecture=<arch> --gpu-code=<code>,...' option combination
    for specifying nvcc behavior with respect to code generation.
    Where use of the previous options generates code for different 'real'
    architectures with the PTX for the same 'virtual' architecture, option
    '--generate-code' allows multiple PTX generations for different
    'virtual' architectures. In fact,
    '--gpu-architecture=<arch> --gpu-code=<code>,...' is equivalent to
    '--generate-code arch=<arch>,code=<code>,...'. '--generate-code'
    options may be repeated for different virtual architectures.
    Allowed keywords for this option: 'arch','code'.

--relocatable-device-code {true|false} (-rdc)
    Enable (disable) the generation of relocatable device code. If
    disabled, executable device code is generated. Relocatable device code
    must be linked before it can be executed.
    Default value: false.

--entries entry,... (-e)
    Specify the global entry functions for which code must be generated.
    By default, code will be generated for all entry functions.

--maxrregcount <amount> (-maxrregcount)
    Specify the maximum amount of registers that GPU functions can use.
    Until a function-specific limit, a higher value will generally
    increase the performance of individual GPU threads that execute this
    function. However, because thread registers are allocated from a
    global register pool on each GPU, a higher value of this option will
    also reduce the maximum thread block size, thereby reducing the amount
    of thread parallelism. Hence, a good maxrregcount value is the result
    of a trade-off. If this option is not specified, then no maximum is
    assumed. A value less than the minimum registers required by the ABI
    will be bumped up by the compiler to the ABI minimum limit. The user
    program may not be able to make use of all registers as some registers
    are reserved by the compiler.

--use_fast_math (-use_fast_math)
    Make use of the fast math library. '--use_fast_math' implies
    '--ftz=true --prec-div=false --prec-sqrt=false --fmad=true'.

--ftz {true|false} (-ftz)
    This option controls single-precision denormals support. '--ftz=true'
    flushes denormal values to zero and '--ftz=false' preserves denormal
    values.
    '--use_fast_math' implies '--ftz=true'.
    Default value: false.

--prec-div {true|false} (-prec-div)
    This option controls single-precision floating-point division and
    reciprocals. '--prec-div=true' enables the IEEE round-to-nearest mode
    and '--prec-div=false' enables the fast approximation mode.
    '--use_fast_math' implies '--prec-div=false'.
    Default value: true.

--prec-sqrt {true|false} (-prec-sqrt)
    This option controls single-precision floating-point square root.
    '--prec-sqrt=true' enables the IEEE round-to-nearest mode and
    '--prec-sqrt=false' enables the fast approximation mode.
    '--use_fast_math' implies '--prec-sqrt=false'.
    Default value: true.

--fmad {true|false} (-fmad)
    This option enables (disables) the contraction of floating-point
    multiplies and adds/subtracts into floating-point multiply-add
    operations (FMAD, FFMA, or DFMA). '--use_fast_math' implies
    '--fmad=true'.
    Default value: true.

--extra-device-vectorization (-extra-device-vectorization)
    This option enables more aggressive device code vectorization.

Options for steering CUDA compilation.
======================================

--default-stream {legacy|null|per-thread} (-default-stream)
    Specify the stream that CUDA commands from the compiled program will
    be sent to by default.
        legacy      The CUDA legacy stream (per context, implicitly
                    synchronizes with other streams).
        per-thread  A normal CUDA stream (per thread, does not implicitly
                    synchronize with other streams).
        'null' is a deprecated alias for 'legacy'.
    Allowed values for this option: 'legacy','null','per-thread'.
    Default value: 'legacy'.

Generic tool options.
=====================

--disable-warnings (-w)
    Inhibit all warning messages.

--keep-device-functions (-keep-device-functions)
    In whole program compilation mode, preserve user defined external
    linkage __device__ function definitions up to PTX.

--source-in-ptx (-src-in-ptx)
    Interleave source in PTX. May only be used in conjunction with
    --device-debug or --generate-line-info.
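The precision options above are tied together by '--use_fast_math': per the text, it is shorthand for the four individual flags, so the two invocations below should request the same device code generation (a hypothetical sketch; kernel.cu is an assumed input file and an installed nvcc is required):

```shell
# Shorthand form: flush denormals, fast division/sqrt, allow FMA contraction.
nvcc --use_fast_math kernel.cu -o fast

# Spelled-out equivalent of the shorthand, per the option descriptions above.
nvcc --ftz=true --prec-div=false --prec-sqrt=false --fmad=true kernel.cu -o fast
```

The individual flags remain useful on their own, e.g. '--fmad=false' to rule out contraction when comparing results against a host reference.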
--restrict (-restrict)
    Programmer assertion that all kernel pointer parameters are restrict
    pointers.

--Wreorder (-Wreorder)
    Generate warnings when member initializers are reordered.

--Wdefault-stream-launch (-Wdefault-stream-launch)
    Generate warning when an explicit stream argument is not provided in
    the <<<...>>> kernel launch syntax.

--Wmissing-launch-bounds (-Wmissing-launch-bounds)
    Generate warning when a __global__ function does not have an explicit
    __launch_bounds__ annotation.

--Wext-lambda-captures-this (-Wext-lambda-captures-this)
    Generate warning when an extended lambda implicitly captures 'this'.

--Wno-deprecated-declarations (-Wno-deprecated-declarations)
    Suppress warning on use of a deprecated entity.

--Wno-deprecated-gpu-targets (-Wno-deprecated-gpu-targets)
    Suppress warnings about deprecated GPU target architectures.

--Werror <kind>,... (-Werror)
    Make warnings of the specified kinds into errors. The following is the
    list of warning kinds accepted by this option:
        cross-execution-space-call
            Be more strict about unsupported cross execution space calls.
            The compiler will generate an error instead of a warning for a
            call from a __host__ __device__ to a __host__ function.
        reorder
            Generate errors when member initializers are reordered.
        deprecated-declarations
            Generate an error on use of a deprecated entity.
        default-stream-launch
            Generate an error when an explicit stream argument is not
            provided in the <<<...>>> kernel launch syntax.
        missing-launch-bounds
            Generate an error when a __global__ function does not have an
            explicit __launch_bounds__ annotation.
        ext-lambda-captures-this
            Generate an error when an extended lambda implicitly captures
            'this'.
    Allowed values for this option:
    'all-warnings','cross-execution-space-call','default-stream-launch',
    'deprecated-declarations','ext-lambda-captures-this',
    'missing-launch-bounds','reorder'.

--resource-usage (-res-usage)
    Show resource usage such as registers and memory of the GPU code.
    This option implies '--nvlink-options --verbose' when
    '--relocatable-device-code=true' is set. Otherwise, it implies
    '--ptxas-options --verbose'.

--extensible-whole-program (-ewp)
    Do extensible whole program compilation of device code.

--no-compress (-no-compress)
    Do not compress device code in fatbinary.

--qpp-config <config> (-qpp-config)
    Specify the configuration ([[compiler/]version,][target]) for the q++
    host compiler. The argument will be forwarded to the q++ compiler with
    its -V flag.

--compile-as-tools-patch (-astoolspatch)
    Compile patch code for CUDA tools. Implies --keep-device-functions.

--list-gpu-code (-code-ls)
    List the non-accelerated gpu architectures (sm_XX) supported by the
    compiler and exit. If both --list-gpu-code and --list-gpu-arch are
    set, the list is displayed using the same format as the
    --generate-code value.

--list-gpu-arch (-arch-ls)
    List the non-accelerated virtual device architectures (compute_XX)
    supported by the compiler and exit. If both --list-gpu-code and
    --list-gpu-arch are set, the list is displayed using the same format
    as the --generate-code value.

--display-error-number (-err-no)
    This option displays a diagnostic number for any message generated by
    the CUDA frontend compiler (note: not the host compiler).

--no-display-error-number (-no-err-no)
    This option disables the display of a diagnostic number for any
    message generated by the CUDA frontend compiler (note: not the host
    compiler).

--diag-error <error-number>,... (-diag-error)
    Emit error for specified diagnostic message(s) generated by the CUDA
    frontend compiler (note: does not affect diagnostics generated by the
    host compiler/preprocessor).

--diag-suppress <error-number>,... (-diag-suppress)
    Suppress specified diagnostic message(s) generated by the CUDA
    frontend compiler (note: does not affect diagnostics generated by the
    host compiler/preprocessor).

--diag-warn <error-number>,...
(-diag-warn)
    Emit warning for specified diagnostic message(s) generated by the CUDA
    frontend compiler (note: does not affect diagnostics generated by the
    host compiler/preprocessor).

--host-linker-script {use-lcs|gen-lcs} (-hls)
    Use the host linker script (GNU/Linux only) to enable support for
    certain CUDA specific requirements, while building executable files or
    shared libraries.
        use-lcs  Prepares a host linker script to enable host linker
                 support for relocatable device object files that are
                 larger in size, which would otherwise, in certain cases,
                 cause the host linker to fail with a relocation
                 truncation error.
        gen-lcs  Generates a host linker script that can be passed to the
                 host linker manually, in the case where the host linker
                 is invoked separately outside of nvcc. This option can be
                 combined with the -shared or -r option to generate linker
                 scripts that can be used while generating host shared
                 libraries or host relocatable links respectively. The
                 file generated using this option must be provided as the
                 last input file to the host linker.
                 Default Output Filename: The output is generated to
                 stdout by default. Use the option -o filename to specify
                 the output filename.
    If a linker script is already in use and passed to the host linker
    using the host linker option --script (or -T), then the generated host
    linker script must augment the existing linker script. In such cases,
    the option -aug-hls must be used to generate a linker script that
    contains only the augmentation parts. Otherwise, the host linker
    behaviour is undefined.
    A host linker option, such as -z with a non-default argument, that can
    modify the default linker script internally, is incompatible with this
    option and the behavior of any such usage is undefined.
    Allowed values for this option: 'gen-lcs','use-lcs'.

--augment-host-linker-script (-aug-hls)
    Enables generation of a host linker script that augments an existing
    host linker script (GNU/Linux only). See option --host-linker-script
    for more details.
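The diagnostic-control options above compose naturally: display diagnostic numbers first, then promote or suppress individual messages by number or kind. A hypothetical sketch (kernel.cu and the diagnostic number 177 are illustrative assumptions; requires an installed nvcc):

```shell
# Promote selected CUDA frontend warning kinds to errors, and show the
# numeric id of every frontend diagnostic so it can be targeted later.
nvcc -Werror reorder,deprecated-declarations -err-no -c kernel.cu

# A specific diagnostic number seen in the output (177 is only an example)
# could then be silenced without disabling other warnings:
nvcc -diag-suppress 177 -c kernel.cu
```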
--relocatable-link (-r)
    Do relocatable / incremental link.

--brief-diagnostics {true|false} (-brief-diag)
    This option disables or enables showing the preprocessed source line
    and column info in a diagnostic. --brief-diagnostics=true will not
    show the source line and column info.
    Default value: false.

--jump-table-density <number> (-jtd)
    This option sets the case-density threshold for jump table generation
    for switch statements. It ranges from 0 to 101 inclusively. When the
    case-density percentage reaches this threshold, switch statements will
    be converted to jump tables.
    Default value: 101.

--relocatable-ptx (-reloc-ptx)
    Insert PTX from relocatable fatbins within objects into the result
    fatbin.

--device-stack-protector {true|false} (-device-stack-protector)
    Enable (disable) the generation of stack canaries in device code.
    Default value: false.

--compress-mode [default,size,speed,balance,none] (-compress-mode)
    Specify compression mode for fatbinary.

--frandom-seed <seed> (-frandom-seed)
    The user specified random seed will be used to replace random numbers
    used in generating symbol names and variable names. The option can be
    used to generate deterministically identical ptx and object files. If
    the input value is a valid number (decimal, octal, or hex), it will be
    used directly as the random seed. Otherwise, the CRC value of the
    passed string will be used instead. NVCC will also pass the option, as
    well as the user specified value, to host compilers, if the host
    compiler is either GCC or Clang, since they support the -frandom-seed
    option as well. Users are responsible for assigning different seeds to
    different files.

--sanitize <tool> (-sanitize)
    Enable compiler instrumentation for the sanitizer tool specified by
    the option, such as initcheck, racecheck, or memcheck.

--jobserver (-jobserver)
    Enable GNU Make Jobserver support (for use with `make -j`).

--help (-h)
    Print this help information on this tool.

--version (-V)
    Print version information on this tool.

--options-file <file>,...
(-optf)
    Include command line options from specified file.

+ cmake -B /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5
-- The C compiler identification is GNU 15.2.1
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/lib64/ccache/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/lib64/ccache/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni
GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI -- Looking for a CUDA compiler -- Looking for a CUDA compiler - NOTFOUND -- Looking for a HIP compiler -- Looking for a HIP compiler - NOTFOUND -- Configuring done (8.9s) -- Generating done (0.0s) -- Build files have been written to: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5 + cmake --build /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5 [ 1%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.c:5851:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function] 5851 | static void ggml_hash_map_free(struct hash_map * map) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.c:5844:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function] 5844 | static struct hash_map * ggml_new_hash_map(size_t size) { | ^~~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.c:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, 
uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.cpp:1:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  2%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-alloc.c:420:12: warning: ‘ggml_vbuffer_n_chunks’ defined but not used [-Wunused-function]
  420 | static int ggml_vbuffer_n_chunks(struct vbuffer * buf) {
      |            ^~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-alloc.c:108:13: warning: ‘ggml_buffer_address_less’ defined but not used [-Wunused-function]
  108 | static bool ggml_buffer_address_less(struct buffer_address a, struct buffer_address b) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-alloc.c:4:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
[  3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-backend.cpp:14:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
[  4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-opt.cpp:6:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o
[  5%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.c:4068:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function]
 4068 | static int iq1_find_best_neighbour(const uint16_t * GGML_RESTRICT neighbours, const uint64_t * GGML_RESTRICT grid,
      |            ^~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function]
  579 | static float make_qkx1_quants(int n, int nmax, const float * GGML_RESTRICT x, uint8_t * GGML_RESTRICT L, float * GGML_RESTRICT the_min,
      |              ^~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.c:5:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/mem_hip.cpp.o
[  7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/mem_nvml.cpp.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/mem_nvml.cpp: In function ‘int ggml_nvml_init()’:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/mem_nvml.cpp:131:9: warning: unused variable ‘status’ [-Wunused-variable]
  131 |     int status = nvml.nvmlInit_v2();
      |         ^~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/mem_nvml.cpp:12:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h: At global scope:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/gguf.cpp:3:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  8%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so
[  8%] Built target ggml-base
[  9%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[  9%] Built target ggml-cpu-x64-feats
[  9%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const
struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 11%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 12%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o [ 12%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used 
[-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 13%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, 
int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 14%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, 
int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, 
int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 16%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) {
      |                                    ^~~~~~~~~~~~~~~~
[ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 19%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o
[ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 21%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-x64.so
[ 21%] Built target ggml-cpu-x64
[ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 22%] Built target ggml-cpu-sse42-feats
[ 23%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o
[ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o
[ 24%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o
[ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o
[ 26%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o
[ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o
[ 27%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | 
static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * 
key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t 
ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, 
size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 32%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: 
warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used 
[-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 32%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static 
void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o In file included from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, 
uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 34%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sse42.so [ 34%] Built target ggml-cpu-sse42 [ 35%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 35%] Built target ggml-cpu-sandybridge-feats [ 36%] Building C object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ 
defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t 
ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but 
not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o [ 39%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * 
key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * 
tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | 
^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct 
ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 41%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct 
ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 42%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: 
[ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o
[ 44%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
[ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 46%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
[ 46%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 47%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sandybridge.so
[ 47%] Built target ggml-cpu-sandybridge
[ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 48%] Built target ggml-cpu-haswell-feats
[ 49%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o
[ 50%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o
[ 51%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o
[ 51%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o
[ 52%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o
| ^~~~~~~~~~~~~~~~~~~~ [ 53%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct 
ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | 
^~~~~~~~~~~~~~~~~~~~ [ 54%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct 
ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | 
^~~~~~~~~~~~~~~~~~~~ [ 54%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct 
ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | 
^~~~~~~~~~~~~~~~~~~~ [ 55%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 56%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * 
key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 57%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 57%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t 
ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 58%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void 
[ 59%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o
[ 60%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 60%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so
[ 60%] Built target ggml-cpu-haswell
[ 61%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 61%] Built target ggml-cpu-skylakex-feats
[ 62%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o
[ 62%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o
[ 63%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o
[ 64%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o
[ 65%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o
[ 65%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o
[ 66%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o
[ 67%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o
[ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 69%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 70%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, 
int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 71%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 71%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const 
struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool 
ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void 
ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined 
but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 73%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-skylakex.so [ 73%] Built target ggml-cpu-skylakex [ 74%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 74%] Built target ggml-cpu-icelake-feats [ 75%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t 
ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o In file included from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 76%] Building CXX object
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o
[ 77%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o
[ 78%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o
[ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o
[ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o
[ 80%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o
[ 81%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o
[ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o
[ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o
[ 83%] Building CXX object
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) {
      |                                    ^~~~~~~~~~~~~~~~
[ 84%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 85%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o
[ 85%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 86%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-icelake.so
[ 86%] Built target ggml-cpu-icelake
[ 87%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 87%] Built target ggml-cpu-alderlake-feats
[ 88%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o
[ 89%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o
[ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o
[ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o
[ 91%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o
[ 92%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o
^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | 
static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | 
static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 94%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct 
ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 95%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 96%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 96%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static 
size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * 
params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 97%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 98%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[100%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-alderlake.so
[100%] Built target ggml-cpu-alderlake
+ go build
go: downloading github.com/spf13/cobra v1.7.0
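The -Wunused-function warnings above are benign: ggml-impl.h defines static helper functions that not every translation unit uses, so GCC flags them in each file that includes the header without calling them. If a quieter build log were wanted, one possible (untested, hypothetical) spec tweak would append the standard GCC/Clang suppression flag to the Fedora default flags before the build step:

```spec
# Hypothetical tweak, not part of this build; %%{build_cflags} and
# %%{build_cxxflags} expand to the Fedora default hardened flags.
export CFLAGS="%{build_cflags} -Wno-unused-function"
export CXXFLAGS="%{build_cxxflags} -Wno-unused-function"
```

This only affects log noise; the warnings have no effect on the produced binaries.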
go: downloading github.com/containerd/console v1.0.3
go: downloading github.com/mattn/go-runewidth v0.0.14
go: downloading github.com/olekukonko/tablewriter v0.0.5
go: downloading golang.org/x/crypto v0.36.0
go: downloading golang.org/x/sync v0.12.0
go: downloading golang.org/x/term v0.30.0
go: downloading github.com/rivo/uniseg v0.2.0
go: downloading golang.org/x/sys v0.31.0
go: downloading github.com/google/uuid v1.6.0
go: downloading golang.org/x/text v0.23.0
go: downloading github.com/emirpasic/gods/v2 v2.0.0-alpha
go: downloading github.com/gin-contrib/cors v1.7.2
go: downloading github.com/gin-gonic/gin v1.10.0
go: downloading golang.org/x/image v0.22.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/d4l3k/go-bfloat16 v0.0.0-20211005043715-690c3bdd05f1
go: downloading github.com/nlpodyssey/gopickle v0.3.0
go: downloading github.com/pdevine/tensor v0.0.0-20240510204454-f88f4562727c
go: downloading github.com/x448/float16 v0.8.4
go: downloading gonum.org/v1/gonum v0.15.0
go: downloading google.golang.org/protobuf v1.34.1
go: downloading github.com/agnivade/levenshtein v1.1.1
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.20
go: downloading golang.org/x/net v0.38.0
go: downloading github.com/dlclark/regexp2 v1.11.4
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/apache/arrow/go/arrow v0.0.0-20211112161151-bc219186db40
go: downloading github.com/chewxy/hm v1.0.0
go: downloading github.com/chewxy/math32 v1.11.0
go: downloading github.com/google/flatbuffers v24.3.25+incompatible
go: downloading go4.org/unsafe/assume-no-moving-gc v0.0.0-20231121144256-b99613f794b6
go: downloading gorgonia.org/vecf32 v0.9.0
go: downloading gorgonia.org/vecf64 v0.9.0
go: downloading github.com/go-playground/validator/v10 v10.20.0
go: downloading github.com/pelletier/go-toml/v2 v2.2.2
go: downloading github.com/ugorji/go/codec v1.2.12
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading golang.org/x/exp v0.0.0-20250218142911-aa4b98e5adaa
go: downloading golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading github.com/golang/protobuf v1.5.4
go: downloading github.com/xtgo/set v1.0.0
go: downloading github.com/gabriel-vasile/mimetype v1.4.3
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn v1.4.0
go: downloading github.com/go-playground/locales v0.14.1
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.Yfi512
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.5-build
+ '[' /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT '!=' / ']'
+ rm -rf /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT
++ dirname /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT
+ mkdir -p /builddir/build/BUILD/ollama-0.12.5-build
+ mkdir /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT
+ CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CFLAGS
+ CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CXXFLAGS
+ FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FFLAGS
+ FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FCFLAGS
+ VALAFLAGS=-g
+ export VALAFLAGS
+ RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn'
+ export RUSTFLAGS
+ LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '
+ export LDFLAGS
+ LT_SYS_LIBRARY_PATH=/usr/lib64:
+ export LT_SYS_LIBRARY_PATH
+ CC=gcc
+ export CC
+ CXX=g++
+ export CXX
+ cd ollama-0.12.5
+ install -Dm0755 /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ollama /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/bin/ollama
+ install -Dm0644 /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ollamad-main/ollamad.service /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/lib/systemd/system/ollamad.service
+ install -Dm0644 /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ollamad-main/ollamad.conf /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/etc/ollama/ollamad.conf
+ /usr/bin/find-debuginfo -j4 --strict-build-id -m -i --build-id-seed 0.12.5-1.fc42 --unique-debug-suffix -0.12.5-1.fc42.x86_64 --unique-debug-src-base ollama-0.12.5-1.fc42.x86_64 --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 -S debugsourcefiles.list /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5
find-debuginfo: starting
Extracting debug info from 1 files
warning: Unsupported auto-load script at offset 0 in section .debug_gdb_scripts of file /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/bin/ollama. Use `info auto-load python-scripts [REGEXP]' to list them.
DWARF-compressing 1 files
dwz: ./usr/bin/ollama-0.12.5-1.fc42.x86_64.debug: Found compressed .debug_aranges section, not attempting dwz compression
sepdebugcrcfix: Updated 0 CRC32s, 1 CRC32s did match.
Creating .debug symlinks for symlinks to ELF files
Copying sources found by 'debugedit -l' to /usr/src/debug/ollama-0.12.5-1.fc42.x86_64
find-debuginfo: done
+ /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/redhat/brp-ldconfig
+ /usr/lib/rpm/brp-compress
+ /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip
+ /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip
+ /usr/lib/rpm/check-rpaths
+ /usr/lib/rpm/redhat/brp-mangle-shebangs
+ /usr/lib/rpm/brp-remove-la-files
+ env /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 -j4
+ /usr/lib/rpm/redhat/brp-python-hardlink
+ /usr/bin/add-determinism --brp -j4 /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT
Scanned 103 directories and 305 files, processed 0 inodes, 0 modified (0 replaced + 0 rewritten), 0 unsupported format, 0 errors
Reading /builddir/build/BUILD/ollama-0.12.5-build/SPECPARTS/rpm-debuginfo.specpart
Processing files: ollama-0.12.5-1.fc42.x86_64
Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.s3lWhi
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.5-build
+ cd ollama-0.12.5
+ DOCDIR=/builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/doc/ollama
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export DOCDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/doc/ollama
+ cp -pr /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/README.md /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/doc/ollama
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.nzV0BQ
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.5-build
+ cd ollama-0.12.5
+ LICENSEDIR=/builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/licenses/ollama
+ export LC_ALL=C.UTF-8
+ LC_ALL=C.UTF-8
+ export LICENSEDIR
+ /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/licenses/ollama
+ cp -pr /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/LICENSE /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/licenses/ollama
+ RPM_EC=0
++ jobs -p
+ exit 0
Provides: config(ollama) = 0.12.5-1.fc42 ollama = 0.12.5-1.fc42 ollama(x86-64) = 0.12.5-1.fc42
Requires(interp): /bin/sh /bin/sh /bin/sh
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires(pre): /usr/bin/getent /usr/sbin/useradd
Requires(post): /bin/sh
Requires(preun): /bin/sh
Requires(postun): /bin/sh
Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.7)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.4)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libresolv.so.2()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.11)(64bit) libstdc++.so.6(CXXABI_1.3.13)(64bit) libstdc++.so.6(CXXABI_1.3.15)(64bit) libstdc++.so.6(CXXABI_1.3.2)(64bit) libstdc++.so.6(CXXABI_1.3.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.17)(64bit) libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.19)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.22)(64bit) libstdc++.so.6(GLIBCXX_3.4.25)(64bit) libstdc++.so.6(GLIBCXX_3.4.26)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) libstdc++.so.6(GLIBCXX_3.4.32)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) rtld(GNU_HASH)
Processing files: ollama-debugsource-0.12.5-1.fc42.x86_64
Provides: ollama-debugsource = 0.12.5-1.fc42 ollama-debugsource(x86-64) = 0.12.5-1.fc42
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Processing files: ollama-debuginfo-0.12.5-1.fc42.x86_64
Provides: debuginfo(build-id) = 8eb2fe2e028e65578bd81a7040e22769536178c4 ollama-debuginfo = 0.12.5-1.fc42 ollama-debuginfo(x86-64) = 0.12.5-1.fc42
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Recommends: ollama-debugsource(x86-64) = 0.12.5-1.fc42
Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT
Wrote: /builddir/build/RPMS/ollama-debugsource-0.12.5-1.fc42.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-0.12.5-1.fc42.x86_64.rpm
Wrote: /builddir/build/RPMS/ollama-debuginfo-0.12.5-1.fc42.x86_64.rpm
Executing(rmbuild): /bin/sh -e /var/tmp/rpm-tmp.dX87gY
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.5-build
+ test -d /builddir/build/BUILD/ollama-0.12.5-build
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w /builddir/build/BUILD/ollama-0.12.5-build
+ rm -rf /builddir/build/BUILD/ollama-0.12.5-build
+ RPM_EC=0
++ jobs -p
+ exit 0
Finish: rpmbuild ollama-0.12.5-1.fc42.src.rpm
Finish: build phase for ollama-0.12.5-1.fc42.src.rpm
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-42-x86_64-1760489761.899953/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
INFO: Done(/var/lib/copr-rpmbuild/results/ollama-0.12.5-1.fc42.src.rpm) Config(child) 10 minutes 24 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
Finish: run
Running FedoraReview tool
Running: fedora-review --no-colors --prebuilt --rpm-spec --name ollama --mock-config /var/lib/copr-rpmbuild/results/configs/child.cfg
cmd: ['fedora-review', '--no-colors', '--prebuilt', '--rpm-spec', '--name', 'ollama', '--mock-config', '/var/lib/copr-rpmbuild/results/configs/child.cfg']
cwd: /var/lib/copr-rpmbuild/results
rc: 0
stdout: Cache directory "/var/lib/copr-rpmbuild/results/cache/libdnf5" does not exist. Nothing to clean.
Review template in: /var/lib/copr-rpmbuild/results/ollama/review.txt
fedora-review is automated tool, but *YOU* are responsible for manually reviewing the results and finishing the review. Do not just copy-paste the results without understanding them.
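The %install trace earlier in the log relies on `install -D`, which creates any missing parent directories and sets the file mode in a single step, so no separate mkdir/chmod/cp is needed. A self-contained sketch of that behavior against a throwaway temp directory (the paths here are illustrative, not the build's):

```shell
set -eu
tmp=$(mktemp -d)

# Stand-ins for the built binary and the packaged config file
printf '#!/bin/sh\n' > "$tmp/ollama"
printf 'example=1\n' > "$tmp/ollamad.conf"

# Same pattern as: install -Dm0755 .../ollama .../BUILDROOT/usr/bin/ollama
install -Dm0755 "$tmp/ollama" "$tmp/BUILDROOT/usr/bin/ollama"

# Same pattern as: install -Dm0644 .../ollamad.conf .../BUILDROOT/etc/ollama/ollamad.conf
install -Dm0644 "$tmp/ollamad.conf" "$tmp/BUILDROOT/etc/ollama/ollamad.conf"

# -D created usr/bin and etc/ollama under BUILDROOT; -m set the modes
stat -c '%a' "$tmp/BUILDROOT/usr/bin/ollama"           # 755
stat -c '%a' "$tmp/BUILDROOT/etc/ollama/ollamad.conf"  # 644
```
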
stderr: INFO: Processing local files: ollama
INFO: Getting .spec and .srpm Urls from : Local files in /var/lib/copr-rpmbuild/results
INFO: --> SRPM url: file:///var/lib/copr-rpmbuild/results/ollama-0.12.5-1.fc42.src.rpm
INFO: Using review directory: /var/lib/copr-rpmbuild/results/ollama
INFO: Downloading (Source1): https://github.com/mwprado/ollamad/archive/refs/heads/main.zip
INFO: Downloading (Source0): https://github.com/ollama/ollama/archive/refs/tags/v0.12.5.zip
INFO: Running checks and generating report
INFO: Installing built package(s)
INFO: Reading configuration from /etc/mock/site-defaults.cfg
INFO: Reading configuration from /etc/mock/chroot-aliases.cfg
INFO: Reading configuration from /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: WARNING: Probably non-rawhide buildroot used. Rawhide should be used for most package reviews
INFO: Active plugins: C/C++, Shell-api, Generic
Updating and loading repositories: Repositories loaded.
Updating and loading repositories: Repositories loaded.
INFO: ExclusiveArch dependency checking disabled, enable with EXARCH flag
Cache directory "/var/lib/copr-rpmbuild/results/cache/libdnf5" does not exist. Nothing to clean.
Review template in: /var/lib/copr-rpmbuild/results/ollama/review.txt
fedora-review is automated tool, but *YOU* are responsible for manually reviewing the results and finishing the review. Do not just copy-paste the results without understanding them.
Moving the results into `fedora-review' directory.
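The `Requires(pre): /usr/bin/getent /usr/sbin/useradd` line in the dependency output above implies the package creates a service account in a %pre scriptlet. The actual scriptlet is in the spec file, not this log; a typical Fedora-style sketch (the user name, home directory, and comment are assumptions) would be:

```spec
# Hypothetical %pre scriptlet; the real one is not shown in this build log.
%pre
getent passwd ollama >/dev/null || \
    /usr/sbin/useradd -r -d /var/lib/ollama -s /sbin/nologin \
        -c "Ollama system user" ollama
exit 0
```

The `getent` guard makes the scriptlet idempotent across upgrades, and `exit 0` keeps a pre-existing account from failing the install.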
Review template in: /var/lib/copr-rpmbuild/results/fedora-review/review.txt
FedoraReview finished
Running RPMResults tool
Package info: {
    "packages": [
        {
            "name": "ollama",
            "epoch": null,
            "version": "0.12.5",
            "release": "1.fc42",
            "arch": "x86_64"
        },
        {
            "name": "ollama-debuginfo",
            "epoch": null,
            "version": "0.12.5",
            "release": "1.fc42",
            "arch": "x86_64"
        },
        {
            "name": "ollama",
            "epoch": null,
            "version": "0.12.5",
            "release": "1.fc42",
            "arch": "src"
        },
        {
            "name": "ollama-debugsource",
            "epoch": null,
            "version": "0.12.5",
            "release": "1.fc42",
            "arch": "x86_64"
        }
    ]
}
RPMResults finished