## START: Set by rpmautospec
## (rpmautospec version 0.7.3)
## RPMAUTOSPEC: autorelease, autochangelog
%define autorelease(e:s:pb:n) %{?-p:0.}%{lua:
    release_number = 6;
    base_release_number = tonumber(rpm.expand("%{?-b*}%{!?-b:1}"));
    print(release_number + base_release_number - 1);
}%{?-e:.%{-e*}}%{?-s:.%{-s*}}%{!?-n:%{?dist}}
## END: Set by rpmautospec

# For the extra python package gguf that comes with llama-cpp
%global pypi_name gguf
%global pypi_version 0.10.0

# Some optional subpackages, off by default (see the build-time note after the description below)
%bcond_with examples
%if %{with examples}
%global build_examples ON
%else
%global build_examples OFF
%endif

%bcond_with test
%if %{with test}
%global build_test ON
%else
%global build_test OFF
%endif

%bcond_with check

Summary:        Port of Facebook's LLaMA model in C/C++
Name:           llama-cpp

# Licensecheck reports
#
# *No copyright* The Unlicense
# ----------------------------
# common/base64.hpp
# common/stb_image.h
# These are public domain
#
# MIT License
# -----------
# LICENSE
# ...
# This is the main license
License:        MIT AND Apache-2.0 AND LicenseRef-Fedora-Public-Domain

Version:        b4094
Release:        %autorelease
URL:            https://github.com/ggerganov/llama.cpp
Source0:        %{url}/archive/%{version}.tar.gz#/llama.cpp-%{version}.tar.gz

ExclusiveArch:  x86_64 aarch64

%ifarch x86_64
%bcond_without rocm
%else
%bcond_with rocm
%endif

%if %{with rocm}
%global build_hip ON
%global toolchain rocm
# hipcc does not support some clang flags
%global build_cxxflags %(echo %{optflags} | sed -e 's/-fstack-protector-strong/-Xarch_host -fstack-protector-strong/' -e 's/-fcf-protection/-Xarch_host -fcf-protection/')
%else
%global build_hip OFF
%global toolchain gcc
%endif

BuildRequires:  xxd
BuildRequires:  git
BuildRequires:  cmake
BuildRequires:  curl
BuildRequires:  wget
BuildRequires:  langpacks-en
# above are packages in .github/workflows/server.yml
BuildRequires:  libcurl-devel
BuildRequires:  gcc-c++
BuildRequires:  openmpi
BuildRequires:  pthreadpool-devel
%if %{with examples}
BuildRequires:  python3-devel
BuildRequires:  python3dist(pip)
BuildRequires:  python3dist(poetry)
%endif
%if %{with rocm}
BuildRequires:  hipblas-devel
BuildRequires:  rocm-comgr-devel
BuildRequires:  rocm-hip-devel
BuildRequires:  rocblas-devel
BuildRequires:  rocm-runtime-devel
BuildRequires:  rocm-rpm-macros
BuildRequires:  rocm-rpm-macros-modules
Requires:       rocblas
Requires:       hipblas
%endif
Requires:       curl
Recommends:     numactl

%description
The main goal of llama.cpp is to run the LLaMA model using 4-bit integer
quantization on a MacBook

* Plain C/C++ implementation without dependencies
* Apple silicon first-class citizen - optimized via ARM NEON, Accelerate
  and Metal frameworks
* AVX, AVX2 and AVX512 support for x86 architectures
* Mixed F16 / F32 precision
* 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
* CUDA, Metal and OpenCL GPU backend support

The original implementation of llama.cpp was hacked in an evening.
Since then, the project has improved significantly thanks to many
contributions. This project is mainly for educational purposes and serves
as the main playground for developing new features for the ggml library.
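
# A minimal usage sketch, not part of the upstream packaging workflow: the
# optional pieces gated by the bcond_with switches near the top of this spec
# are disabled by default and can be toggled at build time with rpmbuild's
# standard --with/--without options, for example:
#
#   rpmbuild --with examples --with test --with check -ba llama-cpp.spec
#   rpmbuild --without rocm -ba llama-cpp.spec    # skip the HIP backend on x86_64
#
# The spec file name above is assumed to match the package name in the
# local checkout.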
%package devel
Summary:        Port of Facebook's LLaMA model in C/C++
Requires:       %{name}%{?_isa} = %{version}-%{release}

%description devel
The main goal of llama.cpp is to run the LLaMA model using 4-bit integer
quantization on a MacBook

* Plain C/C++ implementation without dependencies
* Apple silicon first-class citizen - optimized via ARM NEON, Accelerate
  and Metal frameworks
* AVX, AVX2 and AVX512 support for x86 architectures
* Mixed F16 / F32 precision
* 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
* CUDA, Metal and OpenCL GPU backend support

The original implementation of llama.cpp was hacked in an evening.
Since then, the project has improved significantly thanks to many
contributions. This project is mainly for educational purposes and serves
as the main playground for developing new features for the ggml library.

%if %{with test}
%package test
Summary:        Tests for %{name}
Requires:       %{name}%{?_isa} = %{version}-%{release}

%description test
%{summary}
%endif

%if %{with examples}
%package examples
Summary:        Examples for %{name}
Requires:       %{name}%{?_isa} = %{version}-%{release}
Requires:       python3dist(numpy)
Requires:       python3dist(torch)
Requires:       python3dist(sentencepiece)

%description examples
%{summary}
%endif

%prep
%autosetup -p1 -n llama.cpp-%{version}

# version the *.so
sed -i -e 's/POSITION_INDEPENDENT_CODE ON/POSITION_INDEPENDENT_CODE ON SOVERSION %{version}/' src/CMakeLists.txt
sed -i -e 's/POSITION_INDEPENDENT_CODE ON/POSITION_INDEPENDENT_CODE ON SOVERSION %{version}/' ggml/src/CMakeLists.txt

# no android needed
rm -rf examples/llama.android

# git cruft
find . -name '.gitignore' -exec rm -rf {} \;

%build
%if %{with examples}
cd %{_vpath_srcdir}/gguf-py
%pyproject_wheel
cd -
%endif
%if %{with rocm}
module load rocm/default
%endif
%cmake \
    -DCMAKE_INSTALL_LIBDIR=%{_lib} \
    -DCMAKE_SKIP_RPATH=ON \
    -DGGML_AVX=OFF \
    -DGGML_AVX2=OFF \
    -DGGML_AVX512=OFF \
    -DGGML_AVX512_VBMI=OFF \
    -DGGML_AVX512_VNNI=OFF \
    -DGGML_FMA=OFF \
    -DGGML_F16C=OFF \
%if %{with rocm}
    -DGGML_HIP=%{build_hip} \
    -DAMDGPU_TARGETS=${ROCM_GPUS} \
%endif
    -DLLAMA_BUILD_EXAMPLES=%{build_examples} \
    -DLLAMA_BUILD_TESTS=%{build_test}
%cmake_build
%if %{with rocm}
module purge
%endif

%install
%if %{with examples}
cd %{_vpath_srcdir}/gguf-py
%pyproject_install
cd -
%endif
%cmake_install

rm -rf %{buildroot}%{_libdir}/libggml_shared.*

%if %{with examples}
mkdir -p %{buildroot}%{_datarootdir}/%{name}
cp -r %{_vpath_srcdir}/examples %{buildroot}%{_datarootdir}/%{name}/
cp -r %{_vpath_srcdir}/models %{buildroot}%{_datarootdir}/%{name}/
cp -r %{_vpath_srcdir}/README.md %{buildroot}%{_datarootdir}/%{name}/
rm -rf %{buildroot}%{_datarootdir}/%{name}/examples/llama.android
%else
rm %{buildroot}%{_bindir}/convert*.py
%endif

%if %{with test}
%if %{with check}
%check
%ctest
%endif
%endif

%files
%license LICENSE
%{_libdir}/libllama.so.%{version}
%{_libdir}/libggml.so.%{version}
%{_libdir}/libggml-base.so.%{version}
%if %{with rocm}
%{_libdir}/libggml-hip.so
%endif

%files devel
%dir %{_libdir}/cmake/llama
%doc README.md
%{_includedir}/ggml.h
%{_includedir}/ggml-*.h
%{_includedir}/llama.h
%{_libdir}/libllama.so
%{_libdir}/libggml.so
%{_libdir}/libggml-base.so
%{_libdir}/libggml-cpu.so
%{_libdir}/cmake/llama/*.cmake
%{_exec_prefix}/lib/pkgconfig/llama.pc

%if %{with test}
%files test
%{_bindir}/test-*
%endif

%if %{with examples}
%files examples
%{_bindir}/convert_hf_to_gguf.py
%{_bindir}/gguf-*
%{_bindir}/llama-*
%{_datarootdir}/%{name}/
%{_libdir}/libllava_shared.so
%{python3_sitelib}/%{pypi_name}
%{python3_sitelib}/%{pypi_name}*.dist-info
%{python3_sitelib}/scripts
%endif
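
# A hypothetical consumer sketch, not used by this spec: the -devel subpackage
# installs llama.h together with the llama.pc pkg-config file, so a downstream
# application could be compiled against the packaged library with, e.g.:
#
#   gcc my_app.c $(pkg-config --cflags --libs llama) -o my_app
#
# "my_app.c" is a placeholder for the consumer's own source file.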

%changelog
## START: Generated by rpmautospec
* Wed Nov 27 2024 Debarshi Ray - b4094-6
- Remove misspelt and unused build options

* Wed Nov 27 2024 Debarshi Ray - b4094-5
- Silence mixed-use-of-spaces-and-tabs

* Wed Nov 27 2024 Debarshi Ray - b4094-4
- Fix the build option to use HIP

* Wed Nov 27 2024 Debarshi Ray - b4094-3
- Fix the build options to use AVX, FMA and F16C instructions

* Tue Nov 26 2024 Tomas Tomecek - b4094-2
- run upstream tests in Fedora CI

* Mon Nov 25 2024 Tomas Tomecek - b4094-1
- Update to b4094

* Fri Oct 18 2024 Mohammadreza Hendiani - b3837-4
- updated dependencies and fixed rocm Issues in rawhide, and, f40, (f39
  doesn't have relevant dependencies)

* Fri Oct 11 2024 Tom Rix - b3837-3
- Add ROCm backend

* Thu Oct 10 2024 Tom Rix - b3837-2
- ccache is not available on RHEL.

* Sat Sep 28 2024 Tom Rix - b3837-1
- Update to b3837

* Wed Sep 04 2024 Tom Rix - b3667-1
- Update to b3667

* Thu Jul 18 2024 Fedora Release Engineering - b3184-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild

* Sat Jun 22 2024 Mohammadreza Hendiani - b3184-3
- added changelog

* Sat Jun 22 2024 Mohammadreza Hendiani - b3184-2
- added .pc file

* Sat Jun 22 2024 Mohammadreza Hendiani - b3184-1
- upgraded to b3184 which is used by llama-cpp-python v0.2.79

* Tue May 21 2024 Mohammadreza Hendiani - b2879-7
- removed old file names .gitignore

* Sun May 19 2024 Tom Rix - b2879-6
- Remove old sources

* Sun May 19 2024 Tom Rix - b2879-5
- Include missing sources

* Sat May 18 2024 Mohammadreza Hendiani - b2879-4
- added build dependencies and added changelog

* Sat May 18 2024 Mohammadreza Hendiani - b2879-3
- added aditional source

* Fri May 17 2024 Mohammadreza Hendiani - b2879-2
- updated

* Fri May 17 2024 Mohammadreza Hendiani - b2879-1
- updated and fix build bugs

* Mon May 13 2024 Mohammadreza Hendiani - b2861-7
- removed source 1

* Mon May 13 2024 Mohammadreza Hendiani - b2861-6
- added llama.cpp-b2861.tar.gz to .gitignore

* Mon May 13 2024 Mohammadreza Hendiani - b2861-5
- fixed source 1 url

* Mon May 13 2024 Mohammadreza Hendiani - b2861-4
- added tag release as source 1

* Mon May 13 2024 Mohammadreza Hendiani - b2861-3
- fix source hash

* Sun May 12 2024 Mohammadreza Hendiani - b2861-2
- fix mistake mistake in version

* Sun May 12 2024 Mohammadreza Hendiani - b2861-1
- update b2861

* Sun May 12 2024 Mohammadreza Hendiani - b2860-2
- added changelog

* Sun May 12 2024 Mohammadreza Hendiani - b2860-1
- bump version to b2860

* Sun May 12 2024 Mohammadreza Hendiani - b2619-5
- upgrade to b2860 tag

* Sun May 12 2024 Mohammadreza Hendiani - b2619-4
- added ccache build dependency because LLAMA_CCACHE=ON on by default

* Sun May 12 2024 Mohammadreza Hendiani - b2619-3
- added numactl as Weak dependency

* Thu Apr 11 2024 Tom Rix - b2619-2
- New sources

* Thu Apr 11 2024 Tomas Tomecek - b2619-1
- Update to b2619 (required by llama-cpp-python-0.2.60)

* Sat Mar 23 2024 Tom Rix - b2417-2
- Fix test subpackage

* Sat Mar 23 2024 Tom Rix - b2417-1
- Initial package

## END: Generated by rpmautospec