Name: juicefs
Version: 1.1.0
Release: 1%{?dist}
Summary: A distributed POSIX file system built on top of Redis and S3

License: Apache-2.0
URL: https://juicefs.com
Source0: https://github.com/juicedata/%{name}/archive/refs/tags/v%{version}.tar.gz

BuildRequires: golang
BuildRequires: gcc
BuildRequires: make
BuildRequires: git
Requires: fuse

%description
JuiceFS is a high-performance POSIX file system released under the Apache
License 2.0, designed in particular for cloud-native environments. Data stored
via JuiceFS is persisted in object storage (e.g. Amazon S3), and the
corresponding metadata can be persisted in various database engines such as
Redis, MySQL, and TiKV, depending on the scenario and requirements.

With JuiceFS, massive cloud storage can be directly connected to big data,
machine learning, artificial intelligence, and various application platforms
in production environments. Without any code changes, cloud storage can be
used as efficiently as local storage.

* Fully POSIX-compatible: use it as a local file system and integrate it
  seamlessly with existing applications without breaking business workflows.
* Fully Hadoop-compatible: the JuiceFS Hadoop Java SDK is compatible with
  Hadoop 2.x and Hadoop 3.x, as well as a variety of components in the
  Hadoop ecosystem.
* S3-compatible: the JuiceFS S3 Gateway provides an S3-compatible interface.
* Cloud-native: a Kubernetes CSI Driver is provided for easily using JuiceFS
  in Kubernetes.
* Shareable: JuiceFS is a shared file storage that can be read and written by
  thousands of clients.
* Strong consistency: a confirmed modification is immediately visible on all
  servers that mount the same file system.
* Outstanding performance: latency can be as low as a few milliseconds, and
  throughput scales nearly without limit (depending on the size of the object
  storage).
* Data encryption: supports data encryption in transit and at rest.
* Global file locks: JuiceFS supports both BSD locks (flock) and POSIX record
  locks (fcntl).
* Data compression: JuiceFS supports LZ4 or Zstandard to compress all your
  data.
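
# Build sketch (illustrative, not part of the packaging logic): assuming
# rpmdevtools is installed and this file is saved as juicefs.spec, a local
# build typically looks like:
#   rpmdev-setuptree
#   spectool -g -R juicefs.spec   # download Source0 into ~/rpmbuild/SOURCES
#   rpmbuild -ba juicefs.spec
#
# Usage sketch (illustrative only; the Redis URL and S3 bucket below are
# placeholders, not anything shipped by this package): after installation a
# file system is typically created and then mounted via FUSE with:
#   juicefs format --storage s3 --bucket https://mybucket.s3.amazonaws.com redis://127.0.0.1:6379/1 myjfs
#   juicefs mount -d redis://127.0.0.1:6379/1 /mnt/jfs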
# Skip debuginfo generation for the Go binary
%global debug_package %{nil}

%prep
%autosetup

%build
make %{?_smp_mflags}

%install
mkdir -p %{buildroot}%{_bindir}
install -m 0755 %{name} %{buildroot}%{_bindir}/%{name}

%files
%license LICENSE
%{_bindir}/%{name}

%changelog
* Wed Apr 19 2023 herald - 1.0.4-1
- cmd/format: remove the testing directory after test (#3418)
- cmd/load: support loading metadata from an encrypted file (#3311)
- meta: recreate the client session with info if it was cleaned before (#3190, #3197)
- meta: use 'max-deletes' to control the number of background clean-up workers as well (#3227, #3401)
- meta/tikv: add background GC worker for TiKV (#3262, #3432)
- chunk: deleting an object should be idempotent (#3171)
- object: set encoding type when listing objects (#3244, #3269)
- object/s3: support using OVH region (#3133)
- object/s3: optionally turn off 'S3Disable100Continue' via url query parameter (#3228)
- deps: upgrade go version to 1.18 (#3135)
- deps: upgrade BadgerDB to v3.2103.5 (#3309)
- deps: upgrade golang.org/x/net to v0.7.0 (#3350)
- cmd/sync: fix the issue that password is displayed in process title (#3256, #3258)
- cmd/sync: fix the issue that sync fails when the source file is growing (#3405)
- cmd/gateway: fix the issue that a folder object is not properly detected when using s3fs on gateway (#3378)
- meta/redis: fix the issue that Redis sentinel with TLS fails hostname verification (#3194)
- meta/redis: fix the issue that client may panic on doRename when a node key is lost (#3266)
- meta/redis: fix the issue that keys may be unwatched unexpectedly in copy_file_range (#3400)
- meta/badger: fix the issue that scanKeysRange ignores begin/end (#3138)
- chunk: fix the issue that staging files may be uploaded for many times (#3157)
- vfs: fix the issue that large response may not be correct when reading from .control (#3170)
- fs: fix the issue that attribute cache is not invalidated after calling Utime (#3137)
- object/obs: fix the issue that error is not properly handled when using OBS encrypted bucket (#3199)
- object/obs&tos: fix the issue that error is not properly handled when calling RangeGet (#3270)
- object/b2: fix the issue that error is not properly handled when getting info of an empty object (#3274)
- object/sftp: fix the issue that the size of a symlinked file is not correct (#3426)
- hadoop: fix the issue that symlink is not followed as expected (091a7bff, #3165)