New features:

- Implement "hierarchical failure domains" and other complex distribution rules, for example EC 4+2 over 3 DCs with 2 chunks per DC ([documentation](docs/config/pool.en.md#level_placement))
- Make OSDs handle ENOSPC: the cluster now stays online even if some OSDs fill up to 100%, and only writes requiring free space hang
- Implement Stage/Unstage and volume locking for CSI to prevent parallel mounting and/or modification of the same volume
- Warn about full and almost-full OSDs in vitastor-cli status
- Add an experimental NBD netlink map mode as an option ([documentation](docs/usage/nbd.en.md))
- Add a --pg parameter to vitastor-cli describe; also print objects with the 0x prefix in human-readable format
- Add [administration docs](docs/usage/admin.en.md)

Bug fixes:

- Fix client operation retry timeout: previously the timeout wasn't applied and writes were retried almost instantly
- Fix monitors crashing on invalid pool configurations
- Fix journaling: make each journal write wait for all previous journal writes
- Fix the monitor considering an OSD's weight to be 0 after deleting the /osd/config/ key online
- Fix a write stall caused by the flusher possibly not trimming the journal on rollback
- Set a 32k csum_block_size for HDD by default in vitastor-disk
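For illustration, the new --pg parameter might be used like this. This is a sketch, not a definitive invocation: it assumes a running cluster, and the PG-id format ("pool.pg") is an assumption based on common usage:

```shell
# Check cluster health, including the new full/almost-full OSD warnings
vitastor-cli status

# Describe only the objects belonging to one placement group
# (the "1.1" PG-id format is an assumption)
vitastor-cli describe --pg 1.1
```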
Vitastor
The Idea
Make Clustered Block Storage Fast Again.
Vitastor is a distributed block and file SDS, a direct replacement for Ceph RBD and CephFS, and also for the internal SDSes of public clouds. In contrast to them, however, Vitastor is fast and simple at the same time. The only thing is that it's slightly young :-).
Vitastor is architecturally similar to Ceph, which means strong consistency, primary replication, symmetric clustering, and automatic data distribution over any number of drives of any size, with configurable redundancy (replication or erasure codes/XOR).
Vitastor primarily targets SSD and SSD+HDD clusters with at least a 10 Gbit/s network. It supports TCP and RDMA, and with proper hardware may achieve 4 KB read and write latency as low as ~0.1 ms, which is ~10 times faster than other popular SDSes like Ceph or the internal systems of public clouds.
Vitastor supports the QEMU, NBD, and NFS protocols, and provides OpenStack, Proxmox, and Kubernetes drivers. More drivers can be created easily.
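To make the QEMU and NBD integration concrete, here is a minimal sketch. It assumes a QEMU built with the Vitastor block driver and a running cluster; the etcd address and the image name "debian12" are placeholders:

```shell
# Attach a Vitastor image directly as a VM disk via the QEMU vitastor driver
# (192.168.7.2:2379 and "debian12" are placeholder values)
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive 'driver=vitastor,etcd_host=192.168.7.2\:2379/v3,image=debian12',format=raw,if=none,id=disk0 \
    -device virtio-blk-pci,drive=disk0

# Or expose the same image as a local block device through NBD
vitastor-nbd map --image debian12
```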
Read more details below in the documentation.
Talks and presentations
- DevOpsConf'2021: presentation (in Russian, in English), video
- Highload'2022: presentation (in Russian), video
Documentation
- Introduction
- Installation
- Configuration
- Usage
- vitastor-cli (command-line interface)
- vitastor-disk (disk management tool)
- fio for benchmarks
- NBD for kernel mounts
- QEMU and qemu-img
- NFS clustered file system and pseudo-FS proxy
- Administration
- Performance
Author and License
Copyright (c) Vitaliy Filippov (vitalif [at] yourcmc.ru), 2019+
Join Vitastor Telegram Chat: https://t.me/vitastor
All server-side code (OSD, Monitor and so on) is licensed under the terms of Vitastor Network Public License 1.1 (VNPL 1.1), a copyleft license based on GNU GPLv3.0 with the additional "Network Interaction" clause which requires opensourcing all programs directly or indirectly interacting with Vitastor through a computer network and expressly designed to be used in conjunction with it ("Proxy Programs"). Proxy Programs may be made public not only under the terms of the same license, but also under the terms of any GPL-Compatible Free Software License, as listed by the Free Software Foundation. This is a stricter copyleft license than the Affero GPL.
Please note that VNPL doesn't require you to open the code of proprietary software running inside a VM if it's not specially designed to be used with Vitastor.
Basically, you can't use the software in a proprietary environment to provide its functionality to users without opensourcing all intermediary components standing between the user and Vitastor or purchasing a commercial license from the author 😀.
Client libraries (cluster_client and so on) are dual-licensed under the same VNPL 1.1 and also GNU GPL 2.0 or later to allow for compatibility with GPLed software like QEMU and fio.
You can find the full text of VNPL-1.1 in the file VNPL-1.1.txt. GPL 2.0 is also included in this repository as GPL-2.0.txt.