## START: Set by rpmautospec
## (rpmautospec version 0.7.3)
## RPMAUTOSPEC: autorelease, autochangelog
%define autorelease(e:s:pb:n) %{?-p:0.}%{lua:
    release_number = 1;
    base_release_number = tonumber(rpm.expand("%{?-b*}%{!?-b:1}"));
    print(release_number + base_release_number - 1);
}%{?-e:.%{-e*}}%{?-s:.%{-s*}}%{!?-n:%{?dist}}
## END: Set by rpmautospec

%global pypi_name llama-cpp-python
%global pypi_version 0.3.2

# it's all python code
%global debug_package %{nil}

Name:           python-%{pypi_name}
Version:        %{pypi_version}
Release:        %autorelease
License:        MIT
Summary:        Simple Python bindings for @ggerganov's llama.cpp library
URL:            https://github.com/abetlen/llama-cpp-python
Source:         %{url}/archive/v%{version}/%{pypi_name}-%{version}.tar.gz
Patch1:         0001-don-t-build-llama.cpp-and-llava.patch
Patch2:         0002-search-for-libllama-so-in-usr-lib64.patch
Patch3:         https://github.com/abetlen/llama-cpp-python/pull/1718.patch#/0003-drop-optional-dependency-of-scikit-build-core.patch

%bcond_without test

# this is what llama-cpp is on
# and this library is by default installed in /usr/lib64/python3.12/site-packages/llama_cpp/__init__.py
ExclusiveArch:  x86_64 aarch64

BuildRequires:  git-core
BuildRequires:  gcc
BuildRequires:  gcc-c++
BuildRequires:  ninja-build
BuildRequires:  python3-devel
BuildRequires:  llama-cpp-devel
%if %{with test}
BuildRequires:  python3-pytest
BuildRequires:  python3-scipy
BuildRequires:  python3-huggingface-hub
%endif

%generate_buildrequires
%pyproject_buildrequires

%description
%{pypi_name} provides:

- Low-level access to the C API via the `ctypes` interface
- High-level Python API for text completion
- OpenAI-compatible web server

%package -n python3-%{pypi_name}
Summary:        %{summary}
# -devel has the unversioned libllama.so
Requires:       llama-cpp-devel

%description -n python3-%{pypi_name}
%{pypi_name} provides:

- Low-level access to the C API via the `ctypes` interface
- High-level Python API for text completion
- OpenAI-compatible web server

%prep
%autosetup -p1 -n %{pypi_name}-%{version} -Sgit

%build
%pyproject_wheel

%if %{with test}
%check
# these 3 llama tests need the ggml-vocab-llama-spm model; we run them in Testing Farm, see plans/
%pytest -v -k 'not test_llama_cpp_tokenization and not test_real_llama and not test_real_model' tests/
%endif

%install
%pyproject_install
%pyproject_save_files -l llama_cpp -L

%files -n python3-%{pypi_name} -f %{pyproject_files}
%license LICENSE.md
%doc README.md

%changelog
## START: Generated by rpmautospec
* Thu Nov 28 2024 Tomas Tomecek - 0.3.2-1
- update to 0.3.2

* Fri Nov 22 2024 Tomas Tomecek - 0.3.1-2
- run upstream tests in Fedora CI

* Mon Oct 07 2024 Tomas Tomecek - 0.3.1-1
- 0.3.1 release

* Thu Sep 12 2024 Tomas Tomecek - 0.2.75-9
- skip failing tests and note why

* Thu Sep 12 2024 Tomas Tomecek - 0.2.75-8
- Correct patch path for scikit-build patch

* Wed Sep 11 2024 Cristian Le - 0.2.75-7
- Drop optional_dependency in scikit-build-core

* Fri Jul 19 2024 Fedora Release Engineering - 0.2.75-6
- Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild

* Wed Jun 19 2024 Python Maint - 0.2.75-4
- Rebuilt for Python 3.13

* Thu May 23 2024 Tomas Tomecek - 0.2.75-3
- build with tests on

* Thu May 23 2024 Tomas Tomecek - 0.2.75-2
- use %%pyproject_save_files -L

* Thu May 23 2024 Mohammadreza Hendiani - 0.2.75-1
- update source to llama-cpp-python-0.2.75.tar.gz

* Wed Apr 17 2024 Tomas Tomecek - 0.2.60-1
- initial import, version 0.2.60

## END: Generated by rpmautospec