Name:           litert
Version:        2.17.0
Release:        1%{?dist}
Summary:        High-performance runtime for on-device AI
URL:            https://github.com/google-ai-edge/LiteRT
Source0:        https://github.com/tensorflow/tensorflow/archive/v%{version}/tensorflow-%{version}.tar.gz
Source1:        litert.pc
Patch0:         tensorflow-lite-native.patch
License:        Apache-2.0

Provides:       %{name} = %{version}-%{release}
Provides:       %{name}%{_isa} = %{version}-%{release}

BuildRequires:  cmake
BuildRequires:  git
BuildRequires:  clang
BuildRequires:  llvm
BuildRequires:  patchelf
BuildRequires:  abseil-cpp-devel
BuildRequires:  eigen3-devel
BuildRequires:  flatbuffers-devel
BuildRequires:  cpuinfo-devel
BuildRequires:  xnnpack-devel
BuildRequires:  protobuf-devel
BuildRequires:  psimd-devel
BuildRequires:  fxdiv-devel
BuildRequires:  FP16-devel
BuildRequires:  pthreadpool-devel
BuildRequires:  opencl-headers
BuildRequires:  vulkan-headers

%description
LiteRT (short for Lite Runtime) is the new name for TensorFlow Lite (TFLite).
While the name is new, it is still the same trusted, high-performance runtime
for on-device AI, now with an expanded vision. Since its debut in 2017, TFLite
has enabled developers to bring ML-powered experiences to over 100K apps
running on 2.7B devices. More recently, TFLite has grown beyond its TensorFlow
roots to support models authored in PyTorch, JAX, and Keras with the same
leading performance. The name LiteRT captures this multi-framework vision:
developers can start with any popular framework and run their model on-device
with exceptional performance.

LiteRT, part of the Google AI Edge suite of tools, is the runtime that lets
you seamlessly deploy ML and AI models on Android, iOS, and embedded devices.
With AI Edge's robust model conversion and optimization tools, you can ready
both open-source and custom models for on-device development.
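# Usage sketch for consumers of the -devel subpackage (assumes the shipped
# litert.pc exports the expected Cflags/Libs; demo.c is a hypothetical source
# file including tensorflow/lite/c/c_api.h):
#   cc -o demo demo.c $(pkg-config --cflags --libs litert)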
%package devel
Summary:        Development files for %{name}
Provides:       %{name}-devel = %{version}-%{release}
Provides:       %{name}-devel%{_isa} = %{version}-%{release}
Requires:       %{name}%{?_isa} = %{version}-%{release}

%description devel
%{summary}.

%prep
%autosetup -n tensorflow-%{version} -p1

%build
%cmake -DTFLITE_ENABLE_GPU=ON -DSYSTEM_PTHREADPOOL=ON tensorflow/lite
%cmake_build

%install
install -d $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite
install tensorflow/lite/*.h $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite
install -d $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/c
install tensorflow/lite/c/{common.h,builtin_op_data.h,c_api.h,c_api_experimental.h,c_api_types.h} $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/c
install -d $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/core/async/c
install tensorflow/lite/core/async/c/*.h $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/core/async/c
install -d $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/core/c
install tensorflow/lite/core/c/*.h $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/core/c
install -d $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/delegates/external
install tensorflow/lite/delegates/external/*.h $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/delegates/external
install -d $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/delegates/gpu
install tensorflow/lite/delegates/gpu/{delegate.h,delegate_options.h,gl_delegate.h,metal_delegate.h} $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/delegates/gpu
install -d $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/delegates/xnnpack
install tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h $RPM_BUILD_ROOT/%{_includedir}/tensorflow/lite/delegates/xnnpack
install -d $RPM_BUILD_ROOT/%{_libexecdir}/litert
install redhat-linux-build/_deps/farmhash-build/libfarmhash.so $RPM_BUILD_ROOT/%{_libexecdir}/litert
install redhat-linux-build/_deps/fft2d-build/libfft2d_fftsg2d.so $RPM_BUILD_ROOT/%{_libexecdir}/litert
install redhat-linux-build/_deps/fft2d-build/libfft2d_fftsg.so $RPM_BUILD_ROOT/%{_libexecdir}/litert
install redhat-linux-build/_deps/xnnpack-build/libXNNPACK.so $RPM_BUILD_ROOT/%{_libexecdir}/litert
install redhat-linux-build/pthreadpool/libpthreadpool.so $RPM_BUILD_ROOT/%{_libexecdir}/litert
install redhat-linux-build/_deps/ruy-build/third_party/cpuinfo/libcpuinfo.so $RPM_BUILD_ROOT/%{_libexecdir}/litert
install -d $RPM_BUILD_ROOT/%{_libdir}/pkgconfig
install redhat-linux-build/libtensorflow-lite.so $RPM_BUILD_ROOT/%{_libdir}/libtensorflow-lite.so.%{version}
ln -s libtensorflow-lite.so.%{version} $RPM_BUILD_ROOT/%{_libdir}/libtensorflow-lite.so
install %{SOURCE1} $RPM_BUILD_ROOT/%{_libdir}/pkgconfig
patchelf --set-rpath %{_libexecdir}/litert $RPM_BUILD_ROOT/%{_libdir}/libtensorflow-lite.so $RPM_BUILD_ROOT/%{_libexecdir}/litert/{libfft2d_fftsg2d.so,libXNNPACK.so}

%check
%ctest

%files
%license LICENSE
%{_libdir}/libtensorflow-lite.so.%{version}
%dir %{_libexecdir}/litert
%{_libexecdir}/litert/*.so

%files devel
%doc tensorflow/lite/c/README.md
%dir %{_includedir}/tensorflow
%dir %{_includedir}/tensorflow/lite
%dir %{_includedir}/tensorflow/lite/c
%dir %{_includedir}/tensorflow/lite/core
%dir %{_includedir}/tensorflow/lite/core/async
%dir %{_includedir}/tensorflow/lite/core/async/c
%dir %{_includedir}/tensorflow/lite/core/c
%dir %{_includedir}/tensorflow/lite/delegates
%dir %{_includedir}/tensorflow/lite/delegates/external
%dir %{_includedir}/tensorflow/lite/delegates/gpu
%dir %{_includedir}/tensorflow/lite/delegates/xnnpack
%{_includedir}/tensorflow/lite/*.h
%{_includedir}/tensorflow/lite/c/*.h
%{_includedir}/tensorflow/lite/core/async/c/*.h
%{_includedir}/tensorflow/lite/core/c/*.h
%{_includedir}/tensorflow/lite/delegates/external/*.h
%{_includedir}/tensorflow/lite/delegates/gpu/*.h
%{_includedir}/tensorflow/lite/delegates/xnnpack/*.h
%{_libdir}/libtensorflow-lite.so
%{_libdir}/pkgconfig/litert.pc

%changelog
* Wed Oct 09 2024 Thomas Sailer - 2.17.0-1
- initial package